Test Report: Docker_Linux 13730

eb19396baacb27bcde6912a0ea5aa6419fc16109:2022-03-29:23253

Tests failed (13/281)

TestAddons/parallel/Registry (220.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 11.7395ms
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-xjsl6" [61f2771f-eaa4-4100-b508-09c5d372da15] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009212437s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-mjdrr" [7a267c87-a6bc-4bd3-ad08-5cad03342b2e] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01673736s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220329171213-564087 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220329171213-564087 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Non-zero exit: kubectl --context addons-20220329171213-564087 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (44.560187034s)
-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr ** 
	Unable to use a TTY - input is not a terminal or the right kind of file
	If you don't see a command prompt, try pressing enter.
	Error attaching, falling back to logs: 
	pod default/registry-test terminated (Error)
** /stderr **
addons_test.go:298: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20220329171213-564087 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:302: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 ip
2022/03/29 17:15:12 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:15:12 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:12 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:15:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:13 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:336: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
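The `[DEBUG] GET ... retrying in 1s (4 left)` / `retrying in 2s (3 left)` trace above is the signature of a retrying HTTP client: 5 attempts with a delay that doubles after each failure, giving up once every attempt hits `connection refused`. A minimal sketch of that retry pattern (the `fetch` callable, delay values, and injectable `sleep` are illustrative assumptions, not minikube's actual implementation):

```python
import time


def get_with_backoff(fetch, attempts=5, initial_delay=1.0, sleep=time.sleep):
    """Call fetch() up to `attempts` times, doubling the delay between
    failures, mirroring the 'retrying in 1s (4 left)' trace in the log."""
    delay = initial_delay
    last_err = None
    for remaining in range(attempts - 1, -1, -1):
        try:
            return fetch()
        except ConnectionError as e:
            last_err = e
            if remaining == 0:
                break  # out of attempts
            print(f"request failed: {e}; retrying in {delay:.0f}s ({remaining} left)")
            sleep(delay)
            delay *= 2
    raise RuntimeError(f"giving up after {attempts} attempt(s): {last_err}")
```

In the failed run every attempt was refused, so the check surfaced as the "giving up after 5 attempt(s)" error seen above — the registry's host port 5000 was simply not accepting connections.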
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20220329171213-564087
helpers_test.go:236: (dbg) docker inspect addons-20220329171213-564087:
-- stdout --
	[
	    {
	        "Id": "661d9eb93130b05d6f9970f397a08d1fd566a2f95f377dcd1d193202aa451b14",
	        "Created": "2022-03-29T17:12:29.544749358Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 565842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:12:29.912545537Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/661d9eb93130b05d6f9970f397a08d1fd566a2f95f377dcd1d193202aa451b14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/661d9eb93130b05d6f9970f397a08d1fd566a2f95f377dcd1d193202aa451b14/hostname",
	        "HostsPath": "/var/lib/docker/containers/661d9eb93130b05d6f9970f397a08d1fd566a2f95f377dcd1d193202aa451b14/hosts",
	        "LogPath": "/var/lib/docker/containers/661d9eb93130b05d6f9970f397a08d1fd566a2f95f377dcd1d193202aa451b14/661d9eb93130b05d6f9970f397a08d1fd566a2f95f377dcd1d193202aa451b14-json.log",
	        "Name": "/addons-20220329171213-564087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20220329171213-564087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20220329171213-564087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1a11366afc922c3e596656146e0994a060941b774b90cb2f4efac73ecfa3842f-init/diff:/var/lib/docker/overlay2/9db4e23be625e034f4ded606113a10eac42e47ab03824d2ab674189ac3bfe07b/diff:/var/lib/docker/overlay2/23cb119bfb0f25fd9defc73c170f1edc0bcfc13d6d5cd5613108d72d2020b31c/diff:/var/lib/docker/overlay2/bc76d55655624ec99d26daa97a683f1a970449af5a278430e255d62e3f8b7357/diff:/var/lib/docker/overlay2/ec38188e1f99f15e49cbf2bb0c04cafd5ff241ea7966de30f2b4201c74cb77cb/diff:/var/lib/docker/overlay2/a5d5403dacc48240e9b97d1b8e55974405d1cf196bfcfa0ca32548f269cc1071/diff:/var/lib/docker/overlay2/9b4ccea6c0eb5887c76137ed35db5e0e51cf583e7c5034dcee8dd746f9a5c3bb/diff:/var/lib/docker/overlay2/8938344848e3a72fe363a3ed45041a50457e8ce2a391113dd515f7afd6d909db/diff:/var/lib/docker/overlay2/b6696995e5a26e0378be0861a49fb24498de5c915b3c02bd34ae778e05b48a9d/diff:/var/lib/docker/overlay2/f95310f65d1c113884a9ac4dc0f127daf9d1b3f623762106478e4fe41692cc2d/diff:/var/lib/docker/overlay2/30ef7d
70756fc9f43cfd45ede0c78a5dbd376911f1844027d7dd8448f0d1bd2c/diff:/var/lib/docker/overlay2/aeeca576548699f29ecc5f8389942ed3bfde02e1b481e0e8365142a90064496c/diff:/var/lib/docker/overlay2/5ba2587df64129d8cf8c96c14448186757d9b360c9e3101c4a20b1edd728ce18/diff:/var/lib/docker/overlay2/64d1213878e17d1927644c40bb0d52e6a3a124b5e86daa58f166ee0704d9da9b/diff:/var/lib/docker/overlay2/7ac9b531b4439100cfb4789e5009915d72b467705e391e0d197a760783cb4e4b/diff:/var/lib/docker/overlay2/f6f1442868cd491bc73dc995e7c0b552c0d2843d43327267ee3d015edc11da4e/diff:/var/lib/docker/overlay2/c7c6c9113fac60b95369a3e535649a67c14c4c74da4c7de68bd1aaf14bce0ac3/diff:/var/lib/docker/overlay2/9eba2b84f547941ca647ea1c9eff5275fae385f1b800741ed421672c6437487a/diff:/var/lib/docker/overlay2/8bb3fb7770413b61ccdf84f4a5cccb728206fcecd1f006ca906874d3c5d4481c/diff:/var/lib/docker/overlay2/7ebf161ae3775c9e0f6ebe9e26d40e46766d5f3387c2ea279679d585cbd19866/diff:/var/lib/docker/overlay2/4d1064116e64fbf54de0c8ef70255b6fc77b005725e02a52281bfa0e5de5a7af/diff:/var/lib/d
ocker/overlay2/f82ba82619b078a905b7e5a1466fc8ca89d8664fa04dc61cf5914aa0c34ae177/diff:/var/lib/docker/overlay2/728d17980e4c7c100416d2fd1be83673103f271144543fb61798e4a0303c1d63/diff:/var/lib/docker/overlay2/d7e175c39be427bc2372876df06eb27ba2b10462c347d1ee8e43a957642f2ca5/diff:/var/lib/docker/overlay2/1e872f98bd0c0432c85e2812af12d33dcacc384f762347889c846540583137be/diff:/var/lib/docker/overlay2/f5da27e443a249317e2670de2816cbae827a62edb0e4475ac004418a25e279d8/diff:/var/lib/docker/overlay2/33e17a308b62964f37647c1f62c13733476a7eaadb28f29ad1d1f21b5d0456ee/diff:/var/lib/docker/overlay2/6b6bb10e19be67a77e94bd177e583241953840e08b30d68eca16b63e2c5fd574/diff:/var/lib/docker/overlay2/8e061338d4e4cf068f61861fc08144097ee117189101f3a71f361481dc288fd3/diff:/var/lib/docker/overlay2/27d99a6f864614a9dad7efdece7ace23256ff5489d66daed625285168e2fcc48/diff:/var/lib/docker/overlay2/8642d51376c5c35316cb2d9d5832c7382cb5e0d9df1b766f5187ab10eaafb4d6/diff:/var/lib/docker/overlay2/9ffbd3f47292209200a9ab357ba5f68beb15c82f2511804d74dcf2ad3b4
4155f/diff:/var/lib/docker/overlay2/d2512b29dd494ed5dc05b52800efe6a97b07803c1d3172d6a9d9b0b45a7e19eb/diff:/var/lib/docker/overlay2/7e87858609885bf7a576966de8888d2db30e18d8b582b6f6434176c59d71cca5/diff:/var/lib/docker/overlay2/54e00a6514941a66517f8aa879166fd5e8660f7ab673e554aa927bfcb19a145d/diff:/var/lib/docker/overlay2/02ced31172683ffa2fe2365aa827ef66d364bd100865b9095680e2c79f2e868e/diff:/var/lib/docker/overlay2/e65eba629c5d8828d9a2c4b08b322edb4b07793e8bfb091b93fd15013209a387/diff:/var/lib/docker/overlay2/3ee0fd224e7a66a3d8cc598c64cdaf0436eab7f466aa34e3406a0058e16a7f30/diff:/var/lib/docker/overlay2/29b13dceeebd7568b56f69e176c7d37f5b88fe4c13065f01a6f3a36606d5b62c/diff:/var/lib/docker/overlay2/b10262d215789890fd0056a6e4ff379df5e663524b5b96d9671e10c54adc5a25/diff:/var/lib/docker/overlay2/a292b90c390a4decbdd1887aa58471b2827752df1ef18358a1fb82fd665de0b4/diff:/var/lib/docker/overlay2/fbac86c28573a8fd7399f9fd0a51ebb8eef8158b8264c242aa16e16f6227522f/diff:/var/lib/docker/overlay2/b0ddb339636d56ff9132bc75064a21216c2e71
f3b3b53d4a39f9fe66133219c2/diff:/var/lib/docker/overlay2/9e52af85e3d331425d5757a9bde2ace3e5e12622a0d748e6559c2a74907adaa1/diff:/var/lib/docker/overlay2/e856b1e5a3fe78b31306313bdf9bc42d7b1f45dc864587f3ce5dfd3793cb96d3/diff:/var/lib/docker/overlay2/1fbed3ccb397ff1873888dc253845b880a4d30dda3b181220402f7592d8a3ad7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a11366afc922c3e596656146e0994a060941b774b90cb2f4efac73ecfa3842f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a11366afc922c3e596656146e0994a060941b774b90cb2f4efac73ecfa3842f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a11366afc922c3e596656146e0994a060941b774b90cb2f4efac73ecfa3842f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20220329171213-564087",
	                "Source": "/var/lib/docker/volumes/addons-20220329171213-564087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20220329171213-564087",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20220329171213-564087",
	                "name.minikube.sigs.k8s.io": "addons-20220329171213-564087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "06e6a5e611cc2e49404c124b41a7d094a5cfb0c3f462b9211c5356e9ae6dacea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49454"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49453"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49451"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/06e6a5e611cc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20220329171213-564087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "661d9eb93130",
	                        "addons-20220329171213-564087"
	                    ],
	                    "NetworkID": "ef493ad930694e4107e6abf90783aebda8743a36077875d1f63230e676a46479",
	                    "EndpointID": "7f874e8fef494fb756f9b7530b10596718da98d60adc2380174824aad6b29411",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20220329171213-564087 -n addons-20220329171213-564087
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-20220329171213-564087 logs -n 25: (1.099916121s)
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                 | download-only-20220329171133-564087   | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:11:48 UTC | Tue, 29 Mar 2022 17:11:49 UTC |
	| delete  | -p                                    | download-only-20220329171133-564087   | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:11:49 UTC | Tue, 29 Mar 2022 17:11:49 UTC |
	|         | download-only-20220329171133-564087   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-only-20220329171133-564087   | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:11:49 UTC | Tue, 29 Mar 2022 17:11:49 UTC |
	|         | download-only-20220329171133-564087   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-docker-20220329171149-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:12:12 UTC | Tue, 29 Mar 2022 17:12:12 UTC |
	|         | download-docker-20220329171149-564087 |                                       |         |         |                               |                               |
	| delete  | -p                                    | binary-mirror-20220329171213-564087   | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:12:13 UTC | Tue, 29 Mar 2022 17:12:13 UTC |
	|         | binary-mirror-20220329171213-564087   |                                       |         |         |                               |                               |
	| start   | -p                                    | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:12:14 UTC | Tue, 29 Mar 2022 17:14:17 UTC |
	|         | addons-20220329171213-564087          |                                       |         |         |                               |                               |
	|         | --wait=true --memory=4000             |                                       |         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |         |         |                               |                               |
	|         | --addons=registry                     |                                       |         |         |                               |                               |
	|         | --addons=metrics-server               |                                       |         |         |                               |                               |
	|         | --addons=olm                          |                                       |         |         |                               |                               |
	|         | --addons=volumesnapshots              |                                       |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver          |                                       |         |         |                               |                               |
	|         | --addons=gcp-auth                     |                                       |         |         |                               |                               |
	|         | --driver=docker                       |                                       |         |         |                               |                               |
	|         | --container-runtime=docker            |                                       |         |         |                               |                               |
	|         | --addons=ingress                      |                                       |         |         |                               |                               |
	|         | --addons=ingress-dns                  |                                       |         |         |                               |                               |
	|         | --addons=helm-tiller                  |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:14:23 UTC | Tue, 29 Mar 2022 17:14:23 UTC |
	|         | addons disable metrics-server         |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:14:29 UTC | Tue, 29 Mar 2022 17:14:29 UTC |
	|         | addons disable helm-tiller            |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:14:38 UTC | Tue, 29 Mar 2022 17:14:39 UTC |
	|         | ssh curl -s http://127.0.0.1/         |                                       |         |         |                               |                               |
	|         | -H 'Host: nginx.example.com'          |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:14:39 UTC | Tue, 29 Mar 2022 17:14:39 UTC |
	|         | ip                                    |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:14:39 UTC | Tue, 29 Mar 2022 17:14:41 UTC |
	|         | addons disable ingress-dns            |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:14:41 UTC | Tue, 29 Mar 2022 17:14:48 UTC |
	|         | addons disable ingress                |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:15:12 UTC | Tue, 29 Mar 2022 17:15:12 UTC |
	|         | ip                                    |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:15:07 UTC | Tue, 29 Mar 2022 17:15:14 UTC |
	|         | addons disable                        |                                       |         |         |                               |                               |
	|         | csi-hostpath-driver                   |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:15:14 UTC | Tue, 29 Mar 2022 17:15:15 UTC |
	|         | addons disable volumesnapshots        |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20220329171213-564087          | addons-20220329171213-564087          | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:17:56 UTC | Tue, 29 Mar 2022 17:17:56 UTC |
	|         | addons disable registry               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:12:14
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:12:14.000716  565190 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:12:14.000862  565190 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:12:14.000873  565190 out.go:310] Setting ErrFile to fd 2...
	I0329 17:12:14.000879  565190 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:12:14.000991  565190 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:12:14.001326  565190 out.go:304] Setting JSON to false
	I0329 17:12:14.002217  565190 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6887,"bootTime":1648567047,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:12:14.002293  565190 start.go:124] virtualization: kvm guest
	I0329 17:12:14.004886  565190 out.go:176] * [addons-20220329171213-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 17:12:14.006349  565190 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:12:14.005035  565190 notify.go:193] Checking for updates...
	I0329 17:12:14.008080  565190 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:12:14.009458  565190 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:12:14.010790  565190 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 17:12:14.012089  565190 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 17:12:14.012382  565190 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:12:14.049099  565190 docker.go:137] docker version: linux-20.10.14
	I0329 17:12:14.049239  565190 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:12:14.134161  565190 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:12:14.0756779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:12:14.134273  565190 docker.go:254] overlay module found
	I0329 17:12:14.136395  565190 out.go:176] * Using the docker driver based on user configuration
	I0329 17:12:14.136437  565190 start.go:283] selected driver: docker
	I0329 17:12:14.136447  565190 start.go:800] validating driver "docker" against <nil>
	I0329 17:12:14.136471  565190 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0329 17:12:14.136521  565190 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0329 17:12:14.136545  565190 out.go:241] ! Your cgroup does not allow setting memory.
	I0329 17:12:14.137945  565190 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0329 17:12:14.138544  565190 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:12:14.224133  565190 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:12:14.166306254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:12:14.224314  565190 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 17:12:14.224497  565190 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 17:12:14.224522  565190 cni.go:93] Creating CNI manager for ""
	I0329 17:12:14.224539  565190 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:12:14.224551  565190 start_flags.go:306] config:
	{Name:addons-20220329171213-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:addons-20220329171213-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:12:14.227003  565190 out.go:176] * Starting control plane node addons-20220329171213-564087 in cluster addons-20220329171213-564087
	I0329 17:12:14.227045  565190 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:12:14.228762  565190 out.go:176] * Pulling base image ...
	I0329 17:12:14.228790  565190 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:12:14.228819  565190 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 17:12:14.228857  565190 cache.go:57] Caching tarball of preloaded images
	I0329 17:12:14.228878  565190 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:12:14.229152  565190 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 17:12:14.229187  565190 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 17:12:14.229550  565190 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/config.json ...
	I0329 17:12:14.229581  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/config.json: {Name:mk06192b51c02b30f297342c72ee9ce8688e5326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:14.272440  565190 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 17:12:14.272471  565190 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 17:12:14.272489  565190 cache.go:208] Successfully downloaded all kic artifacts
	I0329 17:12:14.272530  565190 start.go:348] acquiring machines lock for addons-20220329171213-564087: {Name:mk196d1f7532ca00e8e3df1f55114c04d1e4c1cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 17:12:14.272664  565190 start.go:352] acquired machines lock for "addons-20220329171213-564087" in 112.694µs
	I0329 17:12:14.272687  565190 start.go:90] Provisioning new machine with config: &{Name:addons-20220329171213-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:addons-20220329171213-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 17:12:14.272803  565190 start.go:127] createHost starting for "" (driver="docker")
	I0329 17:12:14.275955  565190 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0329 17:12:14.276250  565190 start.go:161] libmachine.API.Create for "addons-20220329171213-564087" (driver="docker")
	I0329 17:12:14.276316  565190 client.go:168] LocalClient.Create starting
	I0329 17:12:14.276435  565190 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 17:12:14.407838  565190 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 17:12:14.529050  565190 cli_runner.go:133] Run: docker network inspect addons-20220329171213-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 17:12:14.558373  565190 cli_runner.go:180] docker network inspect addons-20220329171213-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 17:12:14.558444  565190 network_create.go:262] running [docker network inspect addons-20220329171213-564087] to gather additional debugging logs...
	I0329 17:12:14.558464  565190 cli_runner.go:133] Run: docker network inspect addons-20220329171213-564087
	W0329 17:12:14.586949  565190 cli_runner.go:180] docker network inspect addons-20220329171213-564087 returned with exit code 1
	I0329 17:12:14.586979  565190 network_create.go:265] error running [docker network inspect addons-20220329171213-564087]: docker network inspect addons-20220329171213-564087: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220329171213-564087
	I0329 17:12:14.586994  565190 network_create.go:267] output of [docker network inspect addons-20220329171213-564087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220329171213-564087
	
	** /stderr **
	I0329 17:12:14.587036  565190 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:12:14.616737  565190 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010600] misses:0}
	I0329 17:12:14.616795  565190 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 17:12:14.616814  565190 network_create.go:114] attempt to create docker network addons-20220329171213-564087 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 17:12:14.616863  565190 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220329171213-564087
	I0329 17:12:14.677488  565190 network_create.go:98] docker network addons-20220329171213-564087 192.168.49.0/24 created
	I0329 17:12:14.677523  565190 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20220329171213-564087" container
	I0329 17:12:14.677584  565190 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 17:12:14.706376  565190 cli_runner.go:133] Run: docker volume create addons-20220329171213-564087 --label name.minikube.sigs.k8s.io=addons-20220329171213-564087 --label created_by.minikube.sigs.k8s.io=true
	I0329 17:12:14.735670  565190 oci.go:102] Successfully created a docker volume addons-20220329171213-564087
	I0329 17:12:14.735745  565190 cli_runner.go:133] Run: docker run --rm --name addons-20220329171213-564087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220329171213-564087 --entrypoint /usr/bin/test -v addons-20220329171213-564087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 17:12:21.217841  565190 cli_runner.go:186] Completed: docker run --rm --name addons-20220329171213-564087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220329171213-564087 --entrypoint /usr/bin/test -v addons-20220329171213-564087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (6.482055764s)
	I0329 17:12:21.217914  565190 oci.go:106] Successfully prepared a docker volume addons-20220329171213-564087
	I0329 17:12:21.217959  565190 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:12:21.217985  565190 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 17:12:21.218070  565190 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220329171213-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 17:12:29.432859  565190 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220329171213-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.214742034s)
	I0329 17:12:29.432901  565190 kic.go:188] duration metric: took 8.214908 seconds to extract preloaded images to volume
	W0329 17:12:29.432952  565190 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0329 17:12:29.432964  565190 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0329 17:12:29.433070  565190 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 17:12:29.516334  565190 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20220329171213-564087 --name addons-20220329171213-564087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220329171213-564087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20220329171213-564087 --network addons-20220329171213-564087 --ip 192.168.49.2 --volume addons-20220329171213-564087:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 17:12:29.921751  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Running}}
	I0329 17:12:29.956353  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:29.986889  565190 cli_runner.go:133] Run: docker exec addons-20220329171213-564087 stat /var/lib/dpkg/alternatives/iptables
	I0329 17:12:30.045629  565190 oci.go:278] the created container "addons-20220329171213-564087" has a running status.
	I0329 17:12:30.045659  565190 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa...
	I0329 17:12:30.132044  565190 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 17:12:30.213363  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:30.248963  565190 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 17:12:30.248988  565190 kic_runner.go:114] Args: [docker exec --privileged addons-20220329171213-564087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 17:12:30.332691  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:30.364600  565190 machine.go:88] provisioning docker machine ...
	I0329 17:12:30.364672  565190 ubuntu.go:169] provisioning hostname "addons-20220329171213-564087"
	I0329 17:12:30.364769  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:30.396559  565190 main.go:130] libmachine: Using SSH client type: native
	I0329 17:12:30.396845  565190 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49454 <nil> <nil>}
	I0329 17:12:30.396880  565190 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20220329171213-564087 && echo "addons-20220329171213-564087" | sudo tee /etc/hostname
	I0329 17:12:30.529078  565190 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20220329171213-564087
	
	I0329 17:12:30.529152  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:30.559949  565190 main.go:130] libmachine: Using SSH client type: native
	I0329 17:12:30.560097  565190 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49454 <nil> <nil>}
	I0329 17:12:30.560116  565190 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20220329171213-564087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20220329171213-564087/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20220329171213-564087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 17:12:30.676790  565190 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 17:12:30.676825  565190 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 17:12:30.676848  565190 ubuntu.go:177] setting up certificates
	I0329 17:12:30.676858  565190 provision.go:83] configureAuth start
	I0329 17:12:30.676904  565190 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220329171213-564087
	I0329 17:12:30.706079  565190 provision.go:138] copyHostCerts
	I0329 17:12:30.706151  565190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 17:12:30.706242  565190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 17:12:30.706302  565190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 17:12:30.706342  565190 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.addons-20220329171213-564087 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20220329171213-564087]
	I0329 17:12:30.817223  565190 provision.go:172] copyRemoteCerts
	I0329 17:12:30.817298  565190 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 17:12:30.817334  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:30.847813  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:30.932323  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 17:12:30.949024  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0329 17:12:30.965530  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 17:12:30.981693  565190 provision.go:86] duration metric: configureAuth took 304.821376ms
	I0329 17:12:30.981725  565190 ubuntu.go:193] setting minikube options for container-runtime
	I0329 17:12:30.981883  565190 config.go:176] Loaded profile config "addons-20220329171213-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:12:30.981935  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:31.012401  565190 main.go:130] libmachine: Using SSH client type: native
	I0329 17:12:31.012574  565190 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49454 <nil> <nil>}
	I0329 17:12:31.012592  565190 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 17:12:31.129041  565190 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 17:12:31.129081  565190 ubuntu.go:71] root file system type: overlay
	I0329 17:12:31.129286  565190 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 17:12:31.129356  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:31.159143  565190 main.go:130] libmachine: Using SSH client type: native
	I0329 17:12:31.159294  565190 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49454 <nil> <nil>}
	I0329 17:12:31.159359  565190 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 17:12:31.285461  565190 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 17:12:31.285540  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:31.314928  565190 main.go:130] libmachine: Using SSH client type: native
	I0329 17:12:31.315102  565190 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49454 <nil> <nil>}
	I0329 17:12:31.315138  565190 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 17:12:31.932010  565190 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 17:12:31.278518685 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 17:12:31.932049  565190 machine.go:91] provisioned docker machine in 1.567415852s
	I0329 17:12:31.932064  565190 client.go:171] LocalClient.Create took 17.655737556s
	I0329 17:12:31.932077  565190 start.go:169] duration metric: libmachine.API.Create for "addons-20220329171213-564087" took 17.655827581s
	I0329 17:12:31.932091  565190 start.go:302] post-start starting for "addons-20220329171213-564087" (driver="docker")
	I0329 17:12:31.932101  565190 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 17:12:31.932219  565190 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 17:12:31.932295  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:31.962023  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:32.048589  565190 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 17:12:32.051242  565190 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 17:12:32.051265  565190 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 17:12:32.051278  565190 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 17:12:32.051291  565190 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 17:12:32.051307  565190 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 17:12:32.051383  565190 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 17:12:32.051414  565190 start.go:305] post-start completed in 119.312396ms
	I0329 17:12:32.051757  565190 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220329171213-564087
	I0329 17:12:32.081473  565190 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/config.json ...
	I0329 17:12:32.081746  565190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 17:12:32.081793  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:32.111209  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:32.193524  565190 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 17:12:32.197254  565190 start.go:130] duration metric: createHost completed in 17.924438297s
	I0329 17:12:32.197278  565190 start.go:81] releasing machines lock for "addons-20220329171213-564087", held for 17.924602786s
	I0329 17:12:32.197362  565190 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220329171213-564087
	I0329 17:12:32.226982  565190 ssh_runner.go:195] Run: systemctl --version
	I0329 17:12:32.227037  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:32.227103  565190 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 17:12:32.227161  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:32.257879  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:32.258446  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:32.484455  565190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 17:12:32.493823  565190 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:12:32.502535  565190 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 17:12:32.502603  565190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 17:12:32.511376  565190 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 17:12:32.523171  565190 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 17:12:32.596048  565190 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 17:12:32.668248  565190 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:12:32.677367  565190 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 17:12:32.752735  565190 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 17:12:32.761736  565190 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:12:32.799688  565190 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:12:32.840024  565190 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 17:12:32.840101  565190 cli_runner.go:133] Run: docker network inspect addons-20220329171213-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:12:32.869205  565190 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0329 17:12:32.872405  565190 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:12:32.881792  565190 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:12:32.881851  565190 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 17:12:32.913423  565190 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 17:12:32.913448  565190 docker.go:537] Images already preloaded, skipping extraction
	I0329 17:12:32.913495  565190 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 17:12:32.944865  565190 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 17:12:32.944891  565190 cache_images.go:84] Images are preloaded, skipping loading
	I0329 17:12:32.944945  565190 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 17:12:33.024449  565190 cni.go:93] Creating CNI manager for ""
	I0329 17:12:33.024469  565190 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:12:33.024478  565190 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 17:12:33.024493  565190 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20220329171213-564087 NodeName:addons-20220329171213-564087 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 17:12:33.024634  565190 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20220329171213-564087"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 17:12:33.024710  565190 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20220329171213-564087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:addons-20220329171213-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0329 17:12:33.024756  565190 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 17:12:33.032167  565190 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 17:12:33.032272  565190 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 17:12:33.038982  565190 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0329 17:12:33.051745  565190 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 17:12:33.064203  565190 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0329 17:12:33.076349  565190 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 17:12:33.079063  565190 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:12:33.087653  565190 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087 for IP: 192.168.49.2
	I0329 17:12:33.087697  565190 certs.go:187] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 17:12:33.226358  565190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt ...
	I0329 17:12:33.226396  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt: {Name:mkdf2d1f6f6986df5bd8da19dcfa7cf053956c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.226622  565190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key ...
	I0329 17:12:33.226641  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key: {Name:mk6ac3b959dcbd37bc44ba45f04dc08d70aa4664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.226763  565190 certs.go:187] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 17:12:33.509322  565190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt ...
	I0329 17:12:33.509364  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt: {Name:mk22af60a6ac674b4c878214447f533d10db6435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.509558  565190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key ...
	I0329 17:12:33.509570  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key: {Name:mkd187f079076315b658423938007d1ca10d95c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.509714  565190 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.key
	I0329 17:12:33.509730  565190 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt with IP's: []
	I0329 17:12:33.867688  565190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt ...
	I0329 17:12:33.867727  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: {Name:mk0adee17fef5de2fec2a2b7f8c70aa144f2615b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.867928  565190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.key ...
	I0329 17:12:33.867944  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.key: {Name:mkf61611cfb87380911a347d26c0f9904344dffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.868055  565190 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.key.dd3b5fb2
	I0329 17:12:33.868072  565190 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 17:12:33.983000  565190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.crt.dd3b5fb2 ...
	I0329 17:12:33.983045  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.crt.dd3b5fb2: {Name:mkbbdb00e5a60f847f8aafa06e382098d85efc6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.983255  565190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.key.dd3b5fb2 ...
	I0329 17:12:33.983270  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.key.dd3b5fb2: {Name:mk915c80cac811cb93b44c26739e16526076311b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:33.983346  565190 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.crt
	I0329 17:12:33.983402  565190 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.key
	I0329 17:12:33.983448  565190 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.key
	I0329 17:12:33.983465  565190 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.crt with IP's: []
	I0329 17:12:34.309343  565190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.crt ...
	I0329 17:12:34.309382  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.crt: {Name:mk0d2fe7c7e8cf05910c1c8b8df6cc96a1d35827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:34.309586  565190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.key ...
	I0329 17:12:34.309602  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.key: {Name:mk8b4d1f1e9040ae07168502e750c62b1f2e46d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:34.309782  565190 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1679 bytes)
	I0329 17:12:34.309822  565190 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 17:12:34.309846  565190 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 17:12:34.309869  565190 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 17:12:34.310399  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 17:12:34.328334  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 17:12:34.344762  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 17:12:34.361504  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0329 17:12:34.378484  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 17:12:34.395121  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0329 17:12:34.411490  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 17:12:34.428000  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0329 17:12:34.444437  565190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 17:12:34.460905  565190 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 17:12:34.472945  565190 ssh_runner.go:195] Run: openssl version
	I0329 17:12:34.478462  565190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 17:12:34.485392  565190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:12:34.488206  565190 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:12:34.488254  565190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:12:34.492812  565190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 17:12:34.499620  565190 kubeadm.go:391] StartCluster: {Name:addons-20220329171213-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:addons-20220329171213-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:12:34.499733  565190 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 17:12:34.529934  565190 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 17:12:34.536828  565190 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 17:12:34.543390  565190 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 17:12:34.543445  565190 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 17:12:34.550051  565190 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 17:12:34.550093  565190 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 17:12:35.028416  565190 out.go:203]   - Generating certificates and keys ...
	I0329 17:12:37.392249  565190 out.go:203]   - Booting up control plane ...
	I0329 17:12:45.428426  565190 out.go:203]   - Configuring RBAC rules ...
	I0329 17:12:45.853749  565190 cni.go:93] Creating CNI manager for ""
	I0329 17:12:45.853782  565190 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:12:45.853821  565190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 17:12:45.853891  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:45.853938  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=addons-20220329171213-564087 minikube.k8s.io/updated_at=2022_03_29T17_12_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:45.862735  565190 ops.go:34] apiserver oom_adj: -16
	I0329 17:12:46.262986  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:46.814950  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:47.314419  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:47.814616  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:48.314985  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:48.815010  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:49.314885  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:49.814702  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:50.315107  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:50.814608  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:51.314788  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:51.814705  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:52.314717  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:52.815349  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:53.314478  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:53.814710  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:54.314815  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:54.814994  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:55.314451  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:55.814336  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:56.314713  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:56.814649  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:57.315258  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:57.814994  565190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:12:57.867531  565190 kubeadm.go:1020] duration metric: took 12.013694233s to wait for elevateKubeSystemPrivileges.
	I0329 17:12:57.867566  565190 kubeadm.go:393] StartCluster complete in 23.367954675s
	I0329 17:12:57.867587  565190 settings.go:142] acquiring lock: {Name:mkf193dd78851319876bf7c47a47f525125a4fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:57.867726  565190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:12:57.868074  565190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mke8ff89e3fadc84c0cca24c5855d2fcb9124f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:12:58.383129  565190 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20220329171213-564087" rescaled to 1
	I0329 17:12:58.383212  565190 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 17:12:58.383253  565190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 17:12:58.383281  565190 addons.go:415] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver gcp-auth ingress ingress-dns helm-tiller]
	I0329 17:12:58.385247  565190 out.go:176] * Verifying Kubernetes components...
	I0329 17:12:58.385301  565190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:12:58.383369  565190 addons.go:65] Setting volumesnapshots=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.385349  565190 addons.go:153] Setting addon volumesnapshots=true in "addons-20220329171213-564087"
	I0329 17:12:58.383390  565190 addons.go:65] Setting helm-tiller=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383399  565190 addons.go:65] Setting ingress=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383407  565190 addons.go:65] Setting ingress-dns=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383413  565190 addons.go:65] Setting metrics-server=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383421  565190 addons.go:65] Setting olm=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383429  565190 addons.go:65] Setting storage-provisioner=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383439  565190 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383443  565190 addons.go:65] Setting registry=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383427  565190 addons.go:65] Setting gcp-auth=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383494  565190 addons.go:65] Setting default-storageclass=true in profile "addons-20220329171213-564087"
	I0329 17:12:58.383510  565190 config.go:176] Loaded profile config "addons-20220329171213-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:12:58.385438  565190 addons.go:153] Setting addon olm=true in "addons-20220329171213-564087"
	I0329 17:12:58.385464  565190 addons.go:153] Setting addon helm-tiller=true in "addons-20220329171213-564087"
	I0329 17:12:58.385476  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.385488  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.385481  565190 addons.go:153] Setting addon ingress-dns=true in "addons-20220329171213-564087"
	I0329 17:12:58.385516  565190 mustload.go:65] Loading cluster: addons-20220329171213-564087
	I0329 17:12:58.385576  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.385597  565190 addons.go:153] Setting addon metrics-server=true in "addons-20220329171213-564087"
	I0329 17:12:58.385630  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.385682  565190 config.go:176] Loaded profile config "addons-20220329171213-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:12:58.386008  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.386032  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.386036  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.386078  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.386106  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.385489  565190 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20220329171213-564087"
	I0329 17:12:58.386142  565190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20220329171213-564087"
	I0329 17:12:58.385501  565190 addons.go:153] Setting addon registry=true in "addons-20220329171213-564087"
	I0329 17:12:58.386169  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.386188  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.385446  565190 addons.go:153] Setting addon ingress=true in "addons-20220329171213-564087"
	I0329 17:12:58.386222  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.386402  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.386584  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.386587  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.385444  565190 addons.go:153] Setting addon storage-provisioner=true in "addons-20220329171213-564087"
	W0329 17:12:58.386659  565190 addons.go:165] addon storage-provisioner should already be in state true
	I0329 17:12:58.386661  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.386681  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.386757  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.387138  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.387217  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.459874  565190 addons.go:153] Setting addon default-storageclass=true in "addons-20220329171213-564087"
	W0329 17:12:58.459907  565190 addons.go:165] addon default-storageclass should already be in state true
	I0329 17:12:58.459942  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.460423  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:58.491519  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0329 17:12:58.491616  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0329 17:12:58.491631  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0329 17:12:58.491686  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.504947  565190 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0329 17:12:58.503912  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:58.506311  565190 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/controller:v1.1.1
	I0329 17:12:58.507898  565190 out.go:176]   - Using image registry:2.7.1
	I0329 17:12:58.509511  565190 out.go:176]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0329 17:12:58.516572  565190 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0329 17:12:58.516882  565190 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0329 17:12:58.516896  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15567 bytes)
	I0329 17:12:58.516949  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.517035  565190 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 17:12:58.517124  565190 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 17:12:58.517132  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 17:12:58.517161  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.517262  565190 addons.go:348] installing /etc/kubernetes/addons/registry-rc.yaml
	I0329 17:12:58.517267  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0329 17:12:58.517290  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.525608  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0329 17:12:58.519195  565190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 17:12:58.519896  565190 node_ready.go:35] waiting up to 6m0s for node "addons-20220329171213-564087" to be "Ready" ...
	I0329 17:12:58.528026  565190 out.go:176]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0329 17:12:58.529518  565190 out.go:176]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.6.1
	I0329 17:12:58.529579  565190 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0329 17:12:58.529589  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0329 17:12:58.527881  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0329 17:12:58.529637  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.528217  565190 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0329 17:12:58.531068  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0329 17:12:58.531122  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.531176  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0329 17:12:58.532506  565190 out.go:176]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0329 17:12:58.532614  565190 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0329 17:12:58.532632  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0329 17:12:58.532681  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.533945  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0329 17:12:58.535633  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0329 17:12:58.533667  565190 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0329 17:12:58.536916  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0329 17:12:58.536961  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.538127  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0329 17:12:58.539362  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0329 17:12:58.540666  565190 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0329 17:12:58.542707  565190 out.go:176]   - Using image quay.io/operator-framework/olm
	I0329 17:12:58.540737  565190 addons.go:348] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0329 17:12:58.545634  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0329 17:12:58.545703  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.545795  565190 out.go:176]   - Using image quay.io/operatorhubio/catalog
	I0329 17:12:58.566305  565190 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 17:12:58.566332  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 17:12:58.566402  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.567380  565190 node_ready.go:49] node "addons-20220329171213-564087" has status "Ready":"True"
	I0329 17:12:58.567409  565190 node_ready.go:38] duration metric: took 39.513668ms waiting for node "addons-20220329171213-564087" to be "Ready" ...
	I0329 17:12:58.567422  565190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:12:58.598264  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.598909  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.599205  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.602549  565190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-hzs4s" in "kube-system" namespace to be "Ready" ...
	I0329 17:12:58.607280  565190 addons.go:348] installing /etc/kubernetes/addons/crds.yaml
	I0329 17:12:58.607370  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/crds.yaml (636901 bytes)
	I0329 17:12:58.607467  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:58.608389  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.631787  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.636444  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.655436  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.657636  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.661722  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.661887  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.683774  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:58.866684  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 17:12:58.867697  565190 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0329 17:12:58.867727  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0329 17:12:58.868521  565190 addons.go:348] installing /etc/kubernetes/addons/registry-svc.yaml
	I0329 17:12:58.868545  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0329 17:12:58.870424  565190 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0329 17:12:58.870446  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0329 17:12:58.967032  565190 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0329 17:12:58.967068  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0329 17:12:59.046735  565190 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0329 17:12:59.046814  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0329 17:12:59.050098  565190 addons.go:348] installing /etc/kubernetes/addons/olm.yaml
	I0329 17:12:59.050128  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/olm.yaml (9994 bytes)
	I0329 17:12:59.052883  565190 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0329 17:12:59.052922  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0329 17:12:59.061110  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0329 17:12:59.065511  565190 addons.go:348] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0329 17:12:59.065539  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0329 17:12:59.066394  565190 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0329 17:12:59.066418  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0329 17:12:59.163537  565190 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0329 17:12:59.166178  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0329 17:12:59.168079  565190 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0329 17:12:59.168146  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0329 17:12:59.248722  565190 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0329 17:12:59.248755  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0329 17:12:59.253323  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0329 17:12:59.253609  565190 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0329 17:12:59.253632  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0329 17:12:59.344769  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 17:12:59.350520  565190 addons.go:153] Setting addon gcp-auth=true in "addons-20220329171213-564087"
	I0329 17:12:59.350590  565190 host.go:66] Checking if "addons-20220329171213-564087" exists ...
	I0329 17:12:59.351159  565190 cli_runner.go:133] Run: docker container inspect addons-20220329171213-564087 --format={{.State.Status}}
	I0329 17:12:59.352071  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0329 17:12:59.364187  565190 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0329 17:12:59.364223  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0329 17:12:59.391961  565190 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0329 17:12:59.392028  565190 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220329171213-564087
	I0329 17:12:59.426491  565190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49454 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/addons-20220329171213-564087/id_rsa Username:docker}
	I0329 17:12:59.445332  565190 addons.go:348] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0329 17:12:59.445364  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0329 17:12:59.445539  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0329 17:12:59.468472  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0329 17:12:59.546050  565190 addons.go:348] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0329 17:12:59.546086  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0329 17:12:59.560092  565190 addons.go:348] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0329 17:12:59.560123  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0329 17:12:59.655236  565190 addons.go:348] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0329 17:12:59.655331  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0329 17:12:59.862400  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0329 17:12:59.948764  565190 addons.go:348] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0329 17:12:59.948797  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0329 17:13:00.050384  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0329 17:13:00.050415  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0329 17:13:00.150857  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0329 17:13:00.150885  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0329 17:13:00.172156  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0329 17:13:00.172182  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0329 17:13:00.185227  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0329 17:13:00.185258  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0329 17:13:00.261848  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0329 17:13:00.261877  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0329 17:13:00.276995  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0329 17:13:00.277025  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0329 17:13:00.358319  565190 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0329 17:13:00.358352  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0329 17:13:00.458061  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0329 17:13:00.652390  565190 pod_ready.go:102] pod "coredns-64897985d-hzs4s" in "kube-system" namespace has status "Ready":"False"
	I0329 17:13:01.656031  565190 pod_ready.go:92] pod "coredns-64897985d-hzs4s" in "kube-system" namespace has status "Ready":"True"
	I0329 17:13:01.656134  565190 pod_ready.go:81] duration metric: took 3.053557606s waiting for pod "coredns-64897985d-hzs4s" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.656205  565190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-kmjlr" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.664255  565190 pod_ready.go:92] pod "coredns-64897985d-kmjlr" in "kube-system" namespace has status "Ready":"True"
	I0329 17:13:01.664356  565190 pod_ready.go:81] duration metric: took 8.119427ms waiting for pod "coredns-64897985d-kmjlr" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.664389  565190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.848441  565190 pod_ready.go:92] pod "etcd-addons-20220329171213-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:13:01.848470  565190 pod_ready.go:81] duration metric: took 184.029295ms waiting for pod "etcd-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.848485  565190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.945526  565190 pod_ready.go:92] pod "kube-apiserver-addons-20220329171213-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:13:01.945569  565190 pod_ready.go:81] duration metric: took 97.07563ms waiting for pod "kube-apiserver-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.945585  565190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.951978  565190 pod_ready.go:92] pod "kube-controller-manager-addons-20220329171213-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:13:01.952012  565190 pod_ready.go:81] duration metric: took 6.409289ms waiting for pod "kube-controller-manager-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:01.952026  565190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9mftv" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:02.051544  565190 pod_ready.go:92] pod "kube-proxy-9mftv" in "kube-system" namespace has status "Ready":"True"
	I0329 17:13:02.051575  565190 pod_ready.go:81] duration metric: took 99.538388ms waiting for pod "kube-proxy-9mftv" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:02.051591  565190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:02.357456  565190 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.831734116s)
	I0329 17:13:02.357487  565190 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0329 17:13:02.449098  565190 pod_ready.go:92] pod "kube-scheduler-addons-20220329171213-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:13:02.449180  565190 pod_ready.go:81] duration metric: took 397.578692ms waiting for pod "kube-scheduler-addons-20220329171213-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:13:02.449206  565190 pod_ready.go:38] duration metric: took 3.881753169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:13:02.449271  565190 api_server.go:51] waiting for apiserver process to appear ...
	I0329 17:13:02.449358  565190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 17:13:02.865797  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.999067396s)
	I0329 17:13:04.061184  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.000028934s)
	I0329 17:13:04.061275  565190 addons.go:386] Verifying addon ingress=true in "addons-20220329171213-564087"
	I0329 17:13:04.061296  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.89508542s)
	I0329 17:13:04.062975  565190 out.go:176] * Verifying ingress addon...
	I0329 17:13:04.064218  565190 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0329 17:13:04.067977  565190 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0329 17:13:04.068068  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:04.647303  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:05.158656  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:05.650087  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:06.145672  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.800860867s)
	I0329 17:13:06.145789  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (6.892432056s)
	I0329 17:13:06.145882  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.793758698s)
	W0329 17:13:06.145921  565190 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0329 17:13:06.145960  565190 addons.go:386] Verifying addon registry=true in "addons-20220329171213-564087"
	I0329 17:13:06.146053  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.700491208s)
	I0329 17:13:06.146077  565190 addons.go:386] Verifying addon metrics-server=true in "addons-20220329171213-564087"
	I0329 17:13:06.146143  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.677642531s)
	I0329 17:13:06.146181  565190 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0329 17:13:06.146204  565190 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.754213829s)
	I0329 17:13:06.146316  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.283864852s)
	I0329 17:13:06.148453  565190 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	W0329 17:13:06.146341  565190 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0329 17:13:06.146544  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.688437661s)
	I0329 17:13:06.146575  565190 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.697185324s)
	I0329 17:13:06.151612  565190 out.go:176] * Verifying registry addon...
	I0329 17:13:06.150128  565190 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0329 17:13:06.150144  565190 out.go:176]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.8
	I0329 17:13:06.152324  565190 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0329 17:13:06.152344  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0329 17:13:06.150199  565190 api_server.go:71] duration metric: took 7.766945114s to wait for apiserver process to appear ...
	I0329 17:13:06.152462  565190 api_server.go:87] waiting for apiserver healthz status ...
	I0329 17:13:06.152507  565190 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0329 17:13:06.152743  565190 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0329 17:13:06.150220  565190 addons.go:386] Verifying addon csi-hostpath-driver=true in "addons-20220329171213-564087"
	I0329 17:13:06.154896  565190 out.go:176] * Verifying csi-hostpath-driver addon...
	I0329 17:13:06.156103  565190 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0329 17:13:06.161448  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:06.166840  565190 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0329 17:13:06.166867  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:06.167343  565190 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0329 17:13:06.167936  565190 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0329 17:13:06.167954  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:06.168411  565190 api_server.go:140] control plane version: v1.23.5
	I0329 17:13:06.168439  565190 api_server.go:130] duration metric: took 15.969512ms to wait for apiserver health ...
	I0329 17:13:06.168451  565190 system_pods.go:43] waiting for kube-system pods to appear ...
	I0329 17:13:06.253631  565190 system_pods.go:59] 20 kube-system pods found
	I0329 17:13:06.253676  565190 system_pods.go:61] "coredns-64897985d-hzs4s" [0c0422f3-4912-4cf7-9395-0c5d95a95ee9] Running
	I0329 17:13:06.253684  565190 system_pods.go:61] "coredns-64897985d-kmjlr" [d3ec0634-4dba-41a3-a62d-0734fff4e617] Running
	I0329 17:13:06.253690  565190 system_pods.go:61] "csi-hostpath-attacher-0" [dcc00a35-1a73-4331-9f21-7a51d40a48f8] Pending
	I0329 17:13:06.253696  565190 system_pods.go:61] "csi-hostpath-provisioner-0" [9908636c-20ce-460a-b0a3-72c5116c534e] Pending
	I0329 17:13:06.253702  565190 system_pods.go:61] "csi-hostpath-resizer-0" [50f2e22b-924d-4cea-bf84-f8640cb11189] Pending
	I0329 17:13:06.253707  565190 system_pods.go:61] "csi-hostpath-snapshotter-0" [5c5ff940-326c-4ea6-8ed3-591bd51044c3] Pending
	I0329 17:13:06.253714  565190 system_pods.go:61] "csi-hostpathplugin-0" [9a74ed1c-3834-4670-b15a-981ab50257f0] Pending
	I0329 17:13:06.253727  565190 system_pods.go:61] "etcd-addons-20220329171213-564087" [0cd5a803-86d3-4e58-80ae-0b74074c5046] Running
	I0329 17:13:06.253734  565190 system_pods.go:61] "kube-apiserver-addons-20220329171213-564087" [a8725e84-0f0e-44a9-8bd8-da7ca6f2a783] Running
	I0329 17:13:06.253748  565190 system_pods.go:61] "kube-controller-manager-addons-20220329171213-564087" [d77ef192-2cb2-488b-8666-33662a22264a] Running
	I0329 17:13:06.253759  565190 system_pods.go:61] "kube-ingress-dns-minikube" [eca0b31f-b1e7-44eb-9769-370009b2eba6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0329 17:13:06.253771  565190 system_pods.go:61] "kube-proxy-9mftv" [45418b9a-ada6-4dfc-bbf9-0f2284c58716] Running
	I0329 17:13:06.253778  565190 system_pods.go:61] "kube-scheduler-addons-20220329171213-564087" [41f536b6-edaf-4479-ab85-d8e4e72b0670] Running
	I0329 17:13:06.253793  565190 system_pods.go:61] "metrics-server-bd6f4dd56-p8wkj" [1e19f7ab-ea18-4744-b113-621f8315913e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0329 17:13:06.253807  565190 system_pods.go:61] "registry-proxy-mjdrr" [7a267c87-a6bc-4bd3-ad08-5cad03342b2e] Pending
	I0329 17:13:06.253816  565190 system_pods.go:61] "registry-xjsl6" [61f2771f-eaa4-4100-b508-09c5d372da15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0329 17:13:06.253822  565190 system_pods.go:61] "snapshot-controller-7f76975c56-dqxv8" [b43021b3-b3a6-4d14-b527-8d626bf12f99] Pending
	I0329 17:13:06.253830  565190 system_pods.go:61] "snapshot-controller-7f76975c56-g9j78" [5b4b2f2a-733b-4bfe-9d7b-708b0a956639] Pending
	I0329 17:13:06.253838  565190 system_pods.go:61] "storage-provisioner" [84c52102-344e-43ab-b789-910fc3ac3ef0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0329 17:13:06.253854  565190 system_pods.go:61] "tiller-deploy-6d67d5465d-rqh52" [aa5d19e1-1c65-4e83-b011-cd5b8bc775cc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0329 17:13:06.253871  565190 system_pods.go:74] duration metric: took 85.412909ms to wait for pod list to return data ...
	I0329 17:13:06.253887  565190 default_sa.go:34] waiting for default service account to be created ...
	I0329 17:13:06.256739  565190 default_sa.go:45] found service account: "default"
	I0329 17:13:06.256916  565190 default_sa.go:55] duration metric: took 3.007043ms for default service account to be created ...
	I0329 17:13:06.256943  565190 system_pods.go:116] waiting for k8s-apps to be running ...
	I0329 17:13:06.270692  565190 system_pods.go:86] 20 kube-system pods found
	I0329 17:13:06.270729  565190 system_pods.go:89] "coredns-64897985d-hzs4s" [0c0422f3-4912-4cf7-9395-0c5d95a95ee9] Running
	I0329 17:13:06.270738  565190 system_pods.go:89] "coredns-64897985d-kmjlr" [d3ec0634-4dba-41a3-a62d-0734fff4e617] Running
	I0329 17:13:06.270743  565190 system_pods.go:89] "csi-hostpath-attacher-0" [dcc00a35-1a73-4331-9f21-7a51d40a48f8] Pending
	I0329 17:13:06.270749  565190 system_pods.go:89] "csi-hostpath-provisioner-0" [9908636c-20ce-460a-b0a3-72c5116c534e] Pending
	I0329 17:13:06.270754  565190 system_pods.go:89] "csi-hostpath-resizer-0" [50f2e22b-924d-4cea-bf84-f8640cb11189] Pending
	I0329 17:13:06.270760  565190 system_pods.go:89] "csi-hostpath-snapshotter-0" [5c5ff940-326c-4ea6-8ed3-591bd51044c3] Pending
	I0329 17:13:06.270765  565190 system_pods.go:89] "csi-hostpathplugin-0" [9a74ed1c-3834-4670-b15a-981ab50257f0] Pending
	I0329 17:13:06.270772  565190 system_pods.go:89] "etcd-addons-20220329171213-564087" [0cd5a803-86d3-4e58-80ae-0b74074c5046] Running
	I0329 17:13:06.270790  565190 system_pods.go:89] "kube-apiserver-addons-20220329171213-564087" [a8725e84-0f0e-44a9-8bd8-da7ca6f2a783] Running
	I0329 17:13:06.270797  565190 system_pods.go:89] "kube-controller-manager-addons-20220329171213-564087" [d77ef192-2cb2-488b-8666-33662a22264a] Running
	I0329 17:13:06.270809  565190 system_pods.go:89] "kube-ingress-dns-minikube" [eca0b31f-b1e7-44eb-9769-370009b2eba6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0329 17:13:06.270816  565190 system_pods.go:89] "kube-proxy-9mftv" [45418b9a-ada6-4dfc-bbf9-0f2284c58716] Running
	I0329 17:13:06.270823  565190 system_pods.go:89] "kube-scheduler-addons-20220329171213-564087" [41f536b6-edaf-4479-ab85-d8e4e72b0670] Running
	I0329 17:13:06.270832  565190 system_pods.go:89] "metrics-server-bd6f4dd56-p8wkj" [1e19f7ab-ea18-4744-b113-621f8315913e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0329 17:13:06.270845  565190 system_pods.go:89] "registry-proxy-mjdrr" [7a267c87-a6bc-4bd3-ad08-5cad03342b2e] Pending
	I0329 17:13:06.270856  565190 system_pods.go:89] "registry-xjsl6" [61f2771f-eaa4-4100-b508-09c5d372da15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0329 17:13:06.270867  565190 system_pods.go:89] "snapshot-controller-7f76975c56-dqxv8" [b43021b3-b3a6-4d14-b527-8d626bf12f99] Pending
	I0329 17:13:06.270881  565190 system_pods.go:89] "snapshot-controller-7f76975c56-g9j78" [5b4b2f2a-733b-4bfe-9d7b-708b0a956639] Pending
	I0329 17:13:06.270894  565190 system_pods.go:89] "storage-provisioner" [84c52102-344e-43ab-b789-910fc3ac3ef0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0329 17:13:06.270908  565190 system_pods.go:89] "tiller-deploy-6d67d5465d-rqh52" [aa5d19e1-1c65-4e83-b011-cd5b8bc775cc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0329 17:13:06.270924  565190 system_pods.go:126] duration metric: took 13.96499ms to wait for k8s-apps to be running ...
	I0329 17:13:06.270938  565190 system_svc.go:44] waiting for kubelet service to be running ....
	I0329 17:13:06.270984  565190 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0329 17:13:06.271016  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0329 17:13:06.270990  565190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:13:06.358391  565190 system_svc.go:56] duration metric: took 87.440565ms WaitForService to wait for kubelet.
	I0329 17:13:06.358425  565190 kubeadm.go:548] duration metric: took 7.975172498s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0329 17:13:06.358457  565190 node_conditions.go:102] verifying NodePressure condition ...
	I0329 17:13:06.362010  565190 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0329 17:13:06.362048  565190 node_conditions.go:123] node cpu capacity is 8
	I0329 17:13:06.362062  565190 node_conditions.go:105] duration metric: took 3.599372ms to run NodePressure ...
	I0329 17:13:06.362075  565190 start.go:213] waiting for startup goroutines ...
	I0329 17:13:06.422884  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0329 17:13:06.456469  565190 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0329 17:13:06.456501  565190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4842 bytes)
	I0329 17:13:06.512980  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0329 17:13:06.657389  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:06.749272  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:06.751405  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:06.751577  565190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0329 17:13:07.150197  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:07.252915  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:07.254284  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:07.660301  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:07.671833  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:07.748795  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:08.072215  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:08.254860  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:08.256804  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:08.647003  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:08.747056  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:08.749910  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:09.147920  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:09.248841  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:09.250516  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:09.647709  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:09.671963  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:09.674106  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:10.073904  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:10.175240  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:10.175608  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:10.647210  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:10.671989  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:10.748752  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:11.072721  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:11.173792  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:11.174911  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:11.572530  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:11.672122  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:11.674717  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:12.072652  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:12.172548  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:12.175805  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:12.372945  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.950014851s)
	I0329 17:13:12.373106  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.860077598s)
	I0329 17:13:12.373210  565190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (5.621612305s)
	I0329 17:13:12.374594  565190 addons.go:386] Verifying addon gcp-auth=true in "addons-20220329171213-564087"
	I0329 17:13:12.445845  565190 out.go:176] * Verifying gcp-auth addon...
	I0329 17:13:12.446990  565190 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0329 17:13:12.450609  565190 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0329 17:13:12.450637  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:12.572813  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:12.673007  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:12.673767  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:12.954543  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:13.071734  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:13.171180  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:13.172905  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:13.454206  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:13.572688  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:13.671266  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:13.672670  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:13.953877  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:14.072140  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:14.172102  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:14.173240  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:14.454530  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:14.572410  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:14.671376  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:14.673153  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:14.953927  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:15.072256  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:15.172378  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:15.173170  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:15.454182  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:15.573400  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:15.671983  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:15.673192  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:15.954857  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:16.071879  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:16.171867  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:16.174141  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:16.453849  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:16.572173  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:16.671285  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:16.673787  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:16.954118  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:17.073395  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:17.171988  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:17.175498  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:17.455124  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:17.573296  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:17.672455  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:17.673797  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:17.955072  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:18.073318  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:18.175617  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:18.176609  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:18.454812  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:18.572567  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:18.672732  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:18.673802  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:18.955434  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:19.073114  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:19.171570  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:19.172795  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:19.454654  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:19.572529  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:19.672301  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:19.674106  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:19.954502  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:20.071797  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:20.174313  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:20.175943  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:20.454873  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:20.572734  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:20.672388  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:20.673723  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:20.954333  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:21.072986  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:21.175439  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:21.176730  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:21.455268  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:21.573576  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:21.673125  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:21.673556  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:21.953703  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:22.072492  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:22.172336  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:22.173242  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:22.454855  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:22.572476  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:22.672139  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:22.672949  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:22.954331  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:23.073272  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:23.171845  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:23.173374  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:23.455221  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:23.573295  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:23.672992  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:23.673963  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:23.954320  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:24.073484  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:24.291858  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:24.293185  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:24.454932  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:24.572474  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:24.672281  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:24.673117  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:24.955053  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:25.073547  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:25.172903  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:25.173744  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:25.454698  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:25.572200  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:25.672030  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:25.673570  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:25.955369  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:26.073830  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:26.171571  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:26.173320  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:26.455392  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:26.573686  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:26.673012  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:26.674137  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:26.954516  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:27.072500  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:27.173353  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:27.174265  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:27.454612  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:27.572956  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:27.671272  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:27.675671  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:27.954752  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:28.075047  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:28.173612  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:28.173868  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:28.455330  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:28.571992  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:28.672200  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:28.673829  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:28.954947  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:29.073330  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:29.172225  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:29.173721  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:29.455302  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:29.573226  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:29.672767  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:29.673656  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:29.953940  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:30.072819  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:30.171638  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:30.172729  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:30.453878  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:30.572294  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:30.671917  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:30.673221  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:30.954646  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:31.072591  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:31.172458  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0329 17:13:31.173107  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:31.454097  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:31.572437  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:31.672217  565190 kapi.go:108] duration metric: took 25.519470686s to wait for kubernetes.io/minikube-addons=registry ...
	I0329 17:13:31.672960  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:31.954331  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:32.072626  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:32.175783  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:32.453775  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:32.573175  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:32.674201  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:32.955187  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:33.129870  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:33.173363  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:33.454563  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:33.571966  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:33.673666  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:33.954227  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:34.073019  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:34.174068  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:34.455111  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:34.572983  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:34.674225  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:34.954723  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:35.072701  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:35.173831  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:35.454210  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:35.574962  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:35.673271  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:35.955245  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:36.073349  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:36.174526  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:36.455037  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:36.573119  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:36.674563  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:36.954466  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:37.071917  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:37.177446  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:37.453989  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:37.572193  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:37.673476  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:37.954493  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:38.072742  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:38.175161  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:38.454049  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:38.572496  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:38.674038  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:38.954091  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:39.072785  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:39.173654  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:39.454202  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:39.573195  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:39.674615  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:39.954453  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:40.115910  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:40.173928  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:40.454518  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:40.571818  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:40.673511  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:40.953762  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:41.072109  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:41.173629  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:41.453661  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:41.572243  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:41.674166  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:41.953819  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:42.072281  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:42.173947  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:42.454756  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:42.572890  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:42.672290  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:42.954378  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:43.071823  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:43.173753  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:43.454212  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:43.573400  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:43.674468  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:43.954543  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:44.072930  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:44.173474  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:44.454657  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:44.572639  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:44.676570  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:44.954158  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:45.072781  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:45.173402  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:45.454174  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:45.572563  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:45.674550  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:45.953889  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:46.072785  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:46.173366  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:46.455172  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:46.573032  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:46.673643  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:46.953677  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:47.072261  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:47.174234  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:47.454777  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:47.572334  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:47.673798  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:47.954457  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:48.072088  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:48.173898  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:48.454307  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:48.573037  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:48.674555  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:48.955071  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:49.073273  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:49.174408  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:49.455257  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:49.572951  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:49.674599  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:49.954936  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:50.072739  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:50.174573  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:50.455257  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:50.573484  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:50.673643  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:50.972178  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:51.072644  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:51.173357  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:51.454479  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:51.572548  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:51.673878  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:51.954800  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:52.072136  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:52.174176  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:52.454577  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:52.572372  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:52.675201  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:52.954251  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:53.073273  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:53.173563  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:53.454854  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:53.573858  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:53.673154  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:53.954320  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:54.072055  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:54.173533  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:54.455895  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:54.572375  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:54.749190  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:54.959791  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:55.147948  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:55.173996  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:55.454105  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:55.573236  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:55.746364  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:55.954336  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:56.072733  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:56.175439  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:56.453990  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:56.573085  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:56.673885  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:56.954244  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:57.072197  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:57.173791  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:57.454372  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:57.571346  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:57.674075  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:57.954189  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:58.072370  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:58.173950  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:58.454221  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:58.572859  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:58.678314  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:58.954501  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:59.072799  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:59.175294  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:59.453688  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:13:59.572164  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:13:59.673565  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:13:59.954456  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:00.071524  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:00.174115  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:00.454656  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:00.571800  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:00.674120  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:00.954420  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:01.071779  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:01.174023  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:01.454471  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:01.572339  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:01.675496  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:01.954375  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:02.073458  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:02.174730  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:02.454657  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:02.572778  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:02.676007  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:02.953785  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:03.072794  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:03.173991  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:03.454025  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:03.572867  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:03.673732  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:03.954624  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:04.073005  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:04.174582  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:04.454150  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:04.573226  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:04.674308  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:04.953638  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:05.072195  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:05.174654  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:05.454125  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:05.573418  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:05.674243  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:05.954646  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:06.072437  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:06.174281  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:06.454015  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:06.572556  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:06.673939  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:06.954080  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:07.072666  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:07.173326  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:07.454723  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:07.572439  565190 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0329 17:14:07.674605  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:07.953922  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:08.072716  565190 kapi.go:108] duration metric: took 1m4.008496517s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0329 17:14:08.172965  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:08.454148  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:08.673129  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:08.954045  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:09.174844  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:09.453858  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:09.674490  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:09.953824  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:10.176853  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:10.453797  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:10.675343  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:10.953525  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:11.173408  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:11.454403  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:11.673325  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:11.954644  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:12.173912  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:12.454586  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:12.674150  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:12.955006  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:13.173550  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:13.453772  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:13.674538  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:13.954441  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:14.174656  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:14.454231  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:14.674172  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0329 17:14:14.953550  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:15.175219  565190 kapi.go:108] duration metric: took 1m9.019111348s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0329 17:14:15.453856  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:15.953640  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:16.453278  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:16.954471  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:17.454936  565190 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0329 17:14:17.954488  565190 kapi.go:108] duration metric: took 1m5.507497159s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0329 17:14:17.956447  565190 out.go:176] * Your GCP credentials will now be mounted into every pod created in the addons-20220329171213-564087 cluster.
	I0329 17:14:17.957764  565190 out.go:176] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0329 17:14:17.959110  565190 out.go:176] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0329 17:14:17.960623  565190 out.go:176] * Enabled addons: storage-provisioner, ingress-dns, default-storageclass, metrics-server, helm-tiller, olm, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0329 17:14:17.960642  565190 addons.go:417] enableAddons completed in 1m19.577371867s
	I0329 17:14:17.996502  565190 start.go:498] kubectl: 1.23.5, cluster: 1.23.5 (minor skew: 0)
	I0329 17:14:17.998645  565190 out.go:176] * Done! kubectl is now configured to use "addons-20220329171213-564087" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:12:30 UTC, end at Tue 2022-03-29 17:17:57 UTC. --
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.466585830Z" level=info msg="ignoring event" container=314a70ae55a571b0939f231e1d6a2e9138052b668f943f3c1b36ca1f88609224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.558827563Z" level=info msg="ignoring event" container=e1ae13bf4eb549c8d209754d2f4b1f07ee72144a2f2062155e29d29f0e164248 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.565490438Z" level=info msg="ignoring event" container=d4e27af3218de675d9fb7b653ce3f05ebd1031a1fca39cf5842733204f00b8da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.648030056Z" level=info msg="ignoring event" container=10d961a976bb923cc8e8963178199031d5db76b2296033236da82b1f12d0de4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.648823861Z" level=info msg="ignoring event" container=3c1879623e00e441c6f089b7bc4bec7978bfe9062ed4204fac394a63525fc973 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.658800962Z" level=info msg="ignoring event" container=4b8157fe0c1792748ae124ee43de64793f4514ecb4590862c93b89671aeec6b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.661846540Z" level=info msg="ignoring event" container=9058dcfdc9f87e3f37ffb6805877deb3f913785c8bfcc426d5f215b4cb3e64ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.662337825Z" level=info msg="ignoring event" container=48ee7cf560c5643be5e5b513956924ff6de7cacfd39e848517b74ffc52253cd6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.846039498Z" level=info msg="ignoring event" container=872d97d1a1350ccdb49dbea081c39d6a26d326c31ddf3374cd56d7cb11f60220 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.945828107Z" level=info msg="ignoring event" container=376c5d2394777dbd088cea483ad6676ba1f5f7584bf4f8b1cb5fb9516db6eb67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.945872427Z" level=info msg="ignoring event" container=8be7dda363c0bd9e73160aef8119d79ea8f4678975ec5c08b22d8baa05e71303 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.951838915Z" level=info msg="ignoring event" container=74fcd6ed5088a699d30a99addf8464443365420fe9d8a4c3a040ac6f58820b46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:08 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:08.961819203Z" level=info msg="ignoring event" container=3ac1a0f9968d19d3d15fad9eccf2f968360b4a681b8449b4d75577f2caacf0d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:10 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:10.892935440Z" level=info msg="ignoring event" container=5f4a718016eb9cbf756f6830b41a825f60a13375d09da1b5d32b452609a40ec4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:11 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:11.927995686Z" level=info msg="ignoring event" container=ed608b37d02a893483968751871eb38c225d2de6146348bf172c494fde0cc997 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:15 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:15.161583266Z" level=info msg="ignoring event" container=0ac1e4a86f4f6d4b2971cf5b3b3faf23e5d4b839b630fcf15f7cc729a7c972aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:15 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:15.162492214Z" level=info msg="ignoring event" container=0abe2be3e9111c1601115fd759bd1b9b457108d368b849cbbddea1ac9da85878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:15 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:15.279816382Z" level=info msg="ignoring event" container=ec3ac7abbe537a90ecae7848a9f73dc97467559ba60986375fbaf3bab25ffa57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:15 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:15.280495264Z" level=info msg="ignoring event" container=ca93e4824ad7ac2cceaa7a1164176e73a00c315fc81b1e90af939f13822b281f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:15:42 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:42.381366041Z" level=warning msg="reference for unknown type: " digest="sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4" remote="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Mar 29 17:15:43 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:15:43.411440714Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
	Mar 29 17:17:14 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:17:14.616509003Z" level=warning msg="reference for unknown type: " digest="sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4" remote="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Mar 29 17:17:16 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:17:16.867713928Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
	Mar 29 17:17:56 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:17:56.849688735Z" level=info msg="ignoring event" container=970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:17:56 addons-20220329171213-564087 dockerd[458]: time="2022-03-29T17:17:56.912393737Z" level=info msg="ignoring event" container=e1d1424af1d12c839e70eef559f60b37a5023118cf3d95bb0e0bc75cc73d6eb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                  CREATED             STATE               NAME                      ATTEMPT             POD ID
	a54a58db6153f       gcr.io/google-samples/hello-app@sha256:88b205d7995332e10e836514fbfd59ecaf8976fc15060cd66e85cdcebe7fb356                3 minutes ago       Running             hello-world-app           0                   4f75c8ab4ae61
	a9a7a4eb30d84       nginx@sha256:db7973cb238c8e8acea5982c1048b5987e9e4da60d20daeef7301757de97357a                                          3 minutes ago       Running             nginx                     0                   0b72f6a53cc82
	0f2d69dab9179       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:26c7b2454f1c946d7c80839251d939606620f37c2f275be2796c1ffd96c438f6           3 minutes ago       Running             gcp-auth                  0                   9dd11f5049c16
	bcc2e8fa244ca       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                 3 minutes ago       Running             packageserver             0                   6ff00ab075376
	bab40ffc8ae2b       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                 3 minutes ago       Running             packageserver             0                   4c62de4830f51
	138834108d884       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                 4 minutes ago       Running             olm-operator              0                   2ead8e7cd0e51
	ad60be4b34539       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                 4 minutes ago       Running             catalog-operator          0                   feb8f3561d1ad
	e1f6e7dbb722c       gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da   4 minutes ago       Running             registry-proxy            0                   1a58ecbb2281b
	4105f71a34d4f       6e38f40d628db                                                                                                          4 minutes ago       Running             storage-provisioner       0                   176d02b2a46a5
	f31a76f2df949       a4ca41631cc7a                                                                                                          4 minutes ago       Running             coredns                   0                   a06b253a3493f
	3d359ed91790f       3c53fa8541f95                                                                                                          4 minutes ago       Running             kube-proxy                0                   4513b83712d27
	88f5e60c83d8f       25f8c7f3da61c                                                                                                          5 minutes ago       Running             etcd                      0                   19fc266a8976d
	22d6665aa5c31       b0c9e5e4dbb14                                                                                                          5 minutes ago       Running             kube-controller-manager   0                   b8dbd1a9eec1d
	397b826e67482       3fc1d62d65872                                                                                                          5 minutes ago       Running             kube-apiserver            0                   997d9cc1abd3b
	309aa4af4051b       884d49d6d8c9f                                                                                                          5 minutes ago       Running             kube-scheduler            0                   68e847e45c684
	
	* 
	* ==> coredns [f31a76f2df94] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20220329171213-564087
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20220329171213-564087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=addons-20220329171213-564087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T17_12_45_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20220329171213-564087
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:12:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20220329171213-564087
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:17:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:15:09 +0000   Tue, 29 Mar 2022 17:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:15:09 +0000   Tue, 29 Mar 2022 17:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:15:09 +0000   Tue, 29 Mar 2022 17:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 17:15:09 +0000   Tue, 29 Mar 2022 17:12:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20220329171213-564087
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                15d975f8-563f-4b96-a589-97a643638973
	  Boot ID:                    b9773761-6fd5-4dc5-89e9-c6bdd61e4f8f
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86d5b6469f-fw48l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  gcp-auth                    gcp-auth-59b76855d9-f8rrt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 coredns-64897985d-kmjlr                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m
	  kube-system                 etcd-addons-20220329171213-564087                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m12s
	  kube-system                 kube-apiserver-addons-20220329171213-564087             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-addons-20220329171213-564087    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-9mftv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-20220329171213-564087             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 registry-proxy-mjdrr                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  olm                         catalog-operator-755d759b4b-frdvt                       10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         4m53s
	  olm                         olm-operator-c755654d4-ckb6s                            10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         4m53s
	  olm                         operatorhubio-catalog-cfv9m                             10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         4m7s
	  olm                         packageserver-87889c6d8-4nq4c                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         4m4s
	  olm                         packageserver-87889c6d8-87kwp                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                800m (10%)  0 (0%)
	  memory             560Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m57s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m20s (x5 over 5m20s)  kubelet     Node addons-20220329171213-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x5 over 5m20s)  kubelet     Node addons-20220329171213-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x4 over 5m20s)  kubelet     Node addons-20220329171213-564087 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m13s                  kubelet     Node addons-20220329171213-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s                  kubelet     Node addons-20220329171213-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s                  kubelet     Node addons-20220329171213-564087 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m13s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m2s                   kubelet     Node addons-20220329171213-564087 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +2.971844] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.027872] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.023925] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000035] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[Mar29 17:00] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.024781] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.023928] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +2.947817] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.019863] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000027] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.023920] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +2.955880] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.011812] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	[  +1.023917] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff 52 f4 ad 28 18 10 08 06
	
	* 
	* ==> etcd [88f5e60c83d8] <==
	* {"level":"info","ts":"2022-03-29T17:12:39.454Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:12:40.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-29T17:12:40.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-29T17:12:40.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-29T17:12:40.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:12:40.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:12:40.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-29T17:12:40.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:12:40.278Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-20220329171213-564087 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:12:40.278Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:12:40.278Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:12:40.278Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:12:40.278Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:12:40.278Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:12:40.278Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:12:40.279Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:12:40.279Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:12:40.279Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-29T17:12:40.279Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-03-29T17:13:24.288Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.603668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:85601"}
	{"level":"info","ts":"2022-03-29T17:13:24.288Z","caller":"traceutil/trace.go:171","msg":"trace[276223399] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:976; }","duration":"119.764831ms","start":"2022-03-29T17:13:24.168Z","end":"2022-03-29T17:13:24.288Z","steps":["trace[276223399] 'range keys from in-memory index tree'  (duration: 119.416429ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:13:24.288Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.441242ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:85601"}
	{"level":"info","ts":"2022-03-29T17:13:24.288Z","caller":"traceutil/trace.go:171","msg":"trace[56517302] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:976; }","duration":"118.709633ms","start":"2022-03-29T17:13:24.169Z","end":"2022-03-29T17:13:24.288Z","steps":["trace[56517302] 'range keys from in-memory index tree'  (duration: 118.268788ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:13:50.970Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"192.636383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T17:13:50.970Z","caller":"traceutil/trace.go:171","msg":"trace[1705910593] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1079; }","duration":"192.730081ms","start":"2022-03-29T17:13:50.777Z","end":"2022-03-29T17:13:50.970Z","steps":["trace[1705910593] 'range keys from in-memory index tree'  (duration: 192.546011ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  17:17:58 up  2:00,  0 users,  load average: 0.20, 0.95, 1.34
	Linux addons-20220329171213-564087 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [397b826e6748] <==
	* E0329 17:14:10.148248       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.103.10.75:5443/apis/packages.operators.coreos.com/v1: Get "https://10.103.10.75:5443/apis/packages.operators.coreos.com/v1": dial tcp 10.103.10.75:5443: connect: connection refused
	W0329 17:14:10.777093       1 handler_proxy.go:104] no RequestInfo found in the context
	E0329 17:14:10.777158       1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0329 17:14:10.777167       1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
	W0329 17:14:12.459909       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	E0329 17:14:12.459941       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	W0329 17:14:12.471934       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	E0329 17:14:12.471966       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	W0329 17:14:13.459841       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	E0329 17:14:13.459875       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	W0329 17:14:13.470525       1 dispatcher.go:180] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	E0329 17:14:13.470559       1 dispatcher.go:184] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.101.205.40:443: connect: connection refused
	E0329 17:14:15.792922       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.103.10.75:5443/apis/packages.operators.coreos.com/v1: Get "https://10.103.10.75:5443/apis/packages.operators.coreos.com/v1": context deadline exceeded
	E0329 17:14:15.803954       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again
	W0329 17:14:24.667166       1 dispatcher.go:153] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.106.215.24:443: connect: connection refused
	I0329 17:14:25.638123       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0329 17:14:25.798656       1 alloc.go:329] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.98.228.169]
	I0329 17:14:27.464397       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0329 17:14:39.452457       1 alloc.go:329] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.103.47.197]
	E0329 17:14:41.216692       1 watch.go:248] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc00cb67f50), InnerCloseNotifierFlusher:(*http2.responseWriter)(0xc00e942b80)}, encoder:(*versioning.codec)(0xc00f430960), buf:(*bytes.Buffer)(0xc00b938180)})
	I0329 17:14:46.309381       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0329 17:15:15.945149       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0329 17:15:15.968188       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0329 17:15:15.973354       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	
	* 
	* ==> kube-controller-manager [22d6665aa5c3] <==
	* E0329 17:15:32.810849       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:15:34.736652       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:15:34.736684       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:15:48.972505       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:15:48.972537       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:15:53.669183       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:15:53.669214       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:16:00.249599       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:16:00.249629       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:16:22.826177       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:16:22.826215       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:16:27.856187       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:16:27.856220       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:16:41.823109       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:16:41.823141       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:17:00.605127       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:17:00.605160       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:17:14.147467       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:17:14.147505       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:17:26.082038       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:17:26.082070       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:17:52.186695       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:17:52.186729       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0329 17:17:57.845952       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0329 17:17:57.845986       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [3d359ed91790] <==
	* I0329 17:13:00.446199       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:13:00.446300       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:13:00.446347       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:13:00.471973       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:13:00.472018       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:13:00.472039       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:13:00.472068       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:13:00.472600       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:13:00.473434       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:13:00.473462       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:13:00.473552       1 config.go:317] "Starting service config controller"
	I0329 17:13:00.473559       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:13:00.574040       1 shared_informer.go:247] Caches are synced for service config 
	I0329 17:13:00.574049       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [309aa4af4051] <==
	* E0329 17:12:42.560646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0329 17:12:42.560670       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0329 17:12:42.560710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0329 17:12:42.561588       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0329 17:12:42.561630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0329 17:12:43.384727       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 17:12:43.384756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 17:12:43.392783       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0329 17:12:43.392811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0329 17:12:43.399473       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:12:43.399505       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0329 17:12:43.455811       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:12:43.455855       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:12:43.646221       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:12:43.646293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:12:43.653390       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0329 17:12:43.653424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0329 17:12:43.735710       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:12:43.735749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:12:43.743859       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0329 17:12:43.743879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:12:43.845283       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 17:12:43.845320       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 17:12:45.946315       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0329 17:12:46.757409       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:12:30 UTC, end at Tue 2022-03-29 17:17:58 UTC. --
	Mar 29 17:15:43 addons-20220329171213-564087 kubelet[1908]: E0329 17:15:43.414236    1908 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown" image="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Mar 29 17:15:43 addons-20220329171213-564087 kubelet[1908]: E0329 17:15:43.414392    1908 kuberuntime_manager.go:919] container &Container{Name:registry-server,Image:quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {<nil>} 10m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfpz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod operatorhubio-catalog-cfv9m_olm(9acb0f4f-d7d6-496d-99bd-5bf4882f2b51): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown
	Mar 29 17:15:43 addons-20220329171213-564087 kubelet[1908]: E0329 17:15:43.414452    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:15:45 addons-20220329171213-564087 kubelet[1908]: I0329 17:15:45.984179    1908 scope.go:110] "RemoveContainer" containerID="5f4a718016eb9cbf756f6830b41a825f60a13375d09da1b5d32b452609a40ec4"
	Mar 29 17:15:55 addons-20220329171213-564087 kubelet[1908]: E0329 17:15:55.881109    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:16:07 addons-20220329171213-564087 kubelet[1908]: E0329 17:16:07.880675    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:16:21 addons-20220329171213-564087 kubelet[1908]: E0329 17:16:21.880712    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:16:34 addons-20220329171213-564087 kubelet[1908]: E0329 17:16:34.879940    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:16:46 addons-20220329171213-564087 kubelet[1908]: E0329 17:16:46.880539    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:16:59 addons-20220329171213-564087 kubelet[1908]: E0329 17:16:59.880586    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:17:16 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:16.869759    1908 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown" image="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Mar 29 17:17:16 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:16.869805    1908 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown" image="quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4"
	Mar 29 17:17:16 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:16.869923    1908 kuberuntime_manager.go:919] container &Container{Name:registry-server,Image:quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {<nil>} 10m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfpz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod operatorhubio-catalog-cfv9m_olm(9acb0f4f-d7d6-496d-99bd-5bf4882f2b51): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown
	Mar 29 17:17:16 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:16.869961    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4 not found: manifest unknown: manifest unknown\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:17:27 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:27.881102    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:17:40 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:40.880117    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:17:51 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:51.880671    1908 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\"" pod="olm/operatorhubio-catalog-cfv9m" podUID=9acb0f4f-d7d6-496d-99bd-5bf4882f2b51
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: I0329 17:17:57.176690    1908 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cx2x\" (UniqueName: \"kubernetes.io/projected/61f2771f-eaa4-4100-b508-09c5d372da15-kube-api-access-9cx2x\") pod \"61f2771f-eaa4-4100-b508-09c5d372da15\" (UID: \"61f2771f-eaa4-4100-b508-09c5d372da15\") "
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: I0329 17:17:57.179205    1908 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f2771f-eaa4-4100-b508-09c5d372da15-kube-api-access-9cx2x" (OuterVolumeSpecName: "kube-api-access-9cx2x") pod "61f2771f-eaa4-4100-b508-09c5d372da15" (UID: "61f2771f-eaa4-4100-b508-09c5d372da15"). InnerVolumeSpecName "kube-api-access-9cx2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: I0329 17:17:57.277417    1908 reconciler.go:300] "Volume detached for volume \"kube-api-access-9cx2x\" (UniqueName: \"kubernetes.io/projected/61f2771f-eaa4-4100-b508-09c5d372da15-kube-api-access-9cx2x\") on node \"addons-20220329171213-564087\" DevicePath \"\""
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: I0329 17:17:57.369466    1908 scope.go:110] "RemoveContainer" containerID="970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b"
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: I0329 17:17:57.383617    1908 scope.go:110] "RemoveContainer" containerID="970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b"
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: E0329 17:17:57.384499    1908 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b" containerID="970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b"
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: I0329 17:17:57.384578    1908 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b} err="failed to get container status \"970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b\": rpc error: code = Unknown desc = Error: No such container: 970ee426606dbb0b2f5c048dc35ee01816bfeeb719fc1dd3cb64a8596375280b"
	Mar 29 17:17:57 addons-20220329171213-564087 kubelet[1908]: I0329 17:17:57.896793    1908 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=61f2771f-eaa4-4100-b508-09c5d372da15 path="/var/lib/kubelet/pods/61f2771f-eaa4-4100-b508-09c5d372da15/volumes"
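The kubelet entries above repeat the same ImagePullBackOff for one digest. When scanning a long kubelet dump for this, it can help to extract the image reference from each "Back-off pulling image" line. A minimal sketch; the regex below is an assumption tuned to this log's escaping, not a stable kubelet format:

```go
package main

import (
	"fmt"
	"regexp"
)

// imageRef pulls the escaped image reference out of kubelet
// "Back-off pulling image" messages like the ones above.
var imageRef = regexp.MustCompile(`Back-off pulling image \\+"([^"\\]+)\\+"`)

// failingImage returns the image a back-off line complains about,
// or "" when the line is not a back-off entry.
func failingImage(line string) string {
	if m := imageRef.FindStringSubmatch(line); m != nil {
		return m[1]
	}
	return ""
}

func main() {
	line := `err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/operatorhubio/catalog@sha256:e08a1cd21fe72dd1be92be738b4bf1515298206dac5479c17a4b3ed119e30bd4\\\"\""`
	fmt.Println(failingImage(line))
}
```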
	
	* 
	* ==> storage-provisioner [4105f71a34d4] <==
	* I0329 17:13:06.862463       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:13:06.955242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:13:06.955302       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0329 17:13:07.054855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0329 17:13:07.055036       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20220329171213-564087_0bdb67a8-9f3d-468f-b37d-bcab411ea436!
	I0329 17:13:07.058945       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"259b9eb9-4903-492e-8a33-8a9dedf12305", APIVersion:"v1", ResourceVersion:"850", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20220329171213-564087_0bdb67a8-9f3d-468f-b37d-bcab411ea436 became leader
	I0329 17:13:07.155283       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20220329171213-564087_0bdb67a8-9f3d-468f-b37d-bcab411ea436!
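The leaderelection lines above follow client-go's resource-lock pattern: the winner serializes a LeaderElectionRecord into the `control-plane.alpha.kubernetes.io/leader` annotation on the lock object (here the kube-system/k8s.io-minikube-hostpath Endpoints). A sketch of decoding such a record; the struct's JSON tags match client-go's, but the sample payload below is hypothetical (the identity is taken from the log, the timestamps are invented):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// leaderRecord mirrors the LeaderElectionRecord JSON that client-go
// stores in the control-plane.alpha.kubernetes.io/leader annotation.
type leaderRecord struct {
	HolderIdentity       string `json:"holderIdentity"`
	LeaseDurationSeconds int    `json:"leaseDurationSeconds"`
	AcquireTime          string `json:"acquireTime"`
	RenewTime            string `json:"renewTime"`
}

// parseLeaderRecord decodes one annotation payload.
func parseLeaderRecord(raw string) (leaderRecord, error) {
	var rec leaderRecord
	err := json.Unmarshal([]byte(raw), &rec)
	return rec, err
}

func main() {
	// Hypothetical annotation value for illustration only.
	raw := `{"holderIdentity":"addons-20220329171213-564087_0bdb67a8-9f3d-468f-b37d-bcab411ea436","leaseDurationSeconds":15,"acquireTime":"2022-03-29T17:13:07Z","renewTime":"2022-03-29T17:13:07Z"}`
	rec, err := parseLeaderRecord(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(rec.HolderIdentity, rec.LeaseDurationSeconds)
}
```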
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20220329171213-564087 -n addons-20220329171213-564087
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20220329171213-564087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: operatorhubio-catalog-cfv9m
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20220329171213-564087 describe pod operatorhubio-catalog-cfv9m
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20220329171213-564087 describe pod operatorhubio-catalog-cfv9m: exit status 1 (60.228199ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "operatorhubio-catalog-cfv9m" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20220329171213-564087 describe pod operatorhubio-catalog-cfv9m: exit status 1
--- FAIL: TestAddons/parallel/Registry (220.88s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220329171943-564087 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:915: output didn't produce a URL
functional_test.go:907: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220329171943-564087 --alsologtostderr -v=1] ...
functional_test.go:907: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220329171943-564087 --alsologtostderr -v=1] stdout:
functional_test.go:907: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220329171943-564087 --alsologtostderr -v=1] stderr:
I0329 17:34:58.125754  611206 out.go:297] Setting OutFile to fd 1 ...
I0329 17:34:58.125860  611206 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0329 17:34:58.125869  611206 out.go:310] Setting ErrFile to fd 2...
I0329 17:34:58.125873  611206 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0329 17:34:58.125987  611206 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
I0329 17:34:58.126163  611206 mustload.go:65] Loading cluster: functional-20220329171943-564087
I0329 17:34:58.126515  611206 config.go:176] Loaded profile config "functional-20220329171943-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
I0329 17:34:58.126876  611206 cli_runner.go:133] Run: docker container inspect functional-20220329171943-564087 --format={{.State.Status}}
I0329 17:34:58.162685  611206 host.go:66] Checking if "functional-20220329171943-564087" exists ...
I0329 17:34:58.163017  611206 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0329 17:34:58.264780  611206 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-29 17:34:58.200116328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0329 17:34:58.264920  611206 api_server.go:165] Checking apiserver status ...
I0329 17:34:58.264981  611206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0329 17:34:58.265032  611206 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329171943-564087
I0329 17:34:58.304073  611206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49464 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329171943-564087/id_rsa Username:docker}
I0329 17:34:58.451415  611206 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10579/cgroup
I0329 17:34:58.459830  611206 api_server.go:181] apiserver freezer: "3:freezer:/docker/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/kubepods/burstable/pode1342a603e0fca23a3870306f2976b58/c0967de79e035d02f3ee014e26a734ff96c7dc2c676e98f62c3e670536f00e10"
I0329 17:34:58.459934  611206 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/kubepods/burstable/pode1342a603e0fca23a3870306f2976b58/c0967de79e035d02f3ee014e26a734ff96c7dc2c676e98f62c3e670536f00e10/freezer.state
I0329 17:34:58.467728  611206 api_server.go:203] freezer state: "THAWED"
I0329 17:34:58.467770  611206 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0329 17:34:58.473437  611206 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
ok
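The healthz probe logged above (api_server.go:240-266) is a plain HTTP GET that expects a 200 with body "ok". A self-contained sketch of that check; an httptest server stands in for the real apiserver here:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// newHealthzServer is a stand-in for the apiserver: it answers "ok"
// on /healthz, like the 200 the log shows at api_server.go:266.
func newHealthzServer() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/healthz" {
			io.WriteString(w, "ok")
			return
		}
		http.NotFound(w, r)
	}))
}

// healthz performs the same plain GET the health check does and
// returns the status code and body.
func healthz(base string) (int, string, error) {
	resp, err := http.Get(base + "/healthz")
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), err
}

func main() {
	srv := newHealthzServer()
	defer srv.Close()
	code, body, err := healthz(srv.URL)
	fmt.Println(code, body, err) // prints: 200 ok <nil>
}
```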
W0329 17:34:58.473496  611206 out.go:241] * Enabling dashboard ...
* Enabling dashboard ...
I0329 17:34:58.473733  611206 config.go:176] Loaded profile config "functional-20220329171943-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
I0329 17:34:58.473754  611206 addons.go:65] Setting dashboard=true in profile "functional-20220329171943-564087"
I0329 17:34:58.473764  611206 addons.go:153] Setting addon dashboard=true in "functional-20220329171943-564087"
I0329 17:34:58.473799  611206 host.go:66] Checking if "functional-20220329171943-564087" exists ...
I0329 17:34:58.474320  611206 cli_runner.go:133] Run: docker container inspect functional-20220329171943-564087 --format={{.State.Status}}
I0329 17:34:58.512631  611206 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
I0329 17:34:58.513881  611206 out.go:176]   - Using image kubernetesui/metrics-scraper:v1.0.7
I0329 17:34:58.513941  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0329 17:34:58.513955  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0329 17:34:58.514022  611206 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329171943-564087
I0329 17:34:58.545721  611206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49464 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329171943-564087/id_rsa Username:docker}
I0329 17:34:58.656111  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0329 17:34:58.656140  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0329 17:34:58.672057  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0329 17:34:58.672081  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0329 17:34:58.687231  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0329 17:34:58.687258  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0329 17:34:58.758342  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0329 17:34:58.758364  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0329 17:34:58.773861  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
I0329 17:34:58.773892  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0329 17:34:58.788363  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0329 17:34:58.788391  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0329 17:34:58.855036  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0329 17:34:58.855069  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0329 17:34:58.869585  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0329 17:34:58.869614  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0329 17:34:58.885677  611206 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:34:58.885706  611206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0329 17:34:58.951260  611206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:34:59.744460  611206 addons.go:116] Writing out "functional-20220329171943-564087" config to set dashboard=true...
W0329 17:34:59.744831  611206 out.go:241] * Verifying dashboard health ...
* Verifying dashboard health ...
I0329 17:34:59.745861  611206 kapi.go:59] client config for functional-20220329171943-564087: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0329 17:34:59.756026  611206 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  a6ec0301-d534-42ce-b36e-df2345be66df 768 0 2022-03-29 17:34:59 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2022-03-29 17:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.186.203,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.186.203],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0329 17:34:59.756217  611206 out.go:241] * Launching proxy ...
* Launching proxy ...
I0329 17:34:59.756286  611206 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-20220329171943-564087 proxy --port 36195]
I0329 17:34:59.756539  611206 dashboard.go:157] Waiting for kubectl to output host:port ...
I0329 17:34:59.798905  611206 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0329 17:34:59.798967  611206 out.go:241] * Verifying proxy health ...
* Verifying proxy health ...
I0329 17:34:59.853576  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[75c9fdc5-af09-4ff0-8486-1eafd6bb6817] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc00070d0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7300 TLS:<nil>}
I0329 17:34:59.853690  611206 retry.go:31] will retry after 110.466µs: Temporary Error: unexpected response code: 503
I0329 17:34:59.857582  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c462be81-289d-45e5-8735-7a18a22c2745] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc0004ad780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7400 TLS:<nil>}
I0329 17:34:59.857664  611206 retry.go:31] will retry after 216.077µs: Temporary Error: unexpected response code: 503
I0329 17:34:59.860802  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2ece678-d6c7-425e-8b28-677585b7e2c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc00070d1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000666c00 TLS:<nil>}
I0329 17:34:59.860858  611206 retry.go:31] will retry after 262.026µs: Temporary Error: unexpected response code: 503
I0329 17:34:59.864092  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f63f8f19-3009-4b86-95fb-4a58c2c975e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc000d4f400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7500 TLS:<nil>}
I0329 17:34:59.864158  611206 retry.go:31] will retry after 316.478µs: Temporary Error: unexpected response code: 503
I0329 17:34:59.867292  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff1e3247-d227-43da-b665-8555d8db3a2c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc00070d2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bca00 TLS:<nil>}
I0329 17:34:59.867349  611206 retry.go:31] will retry after 468.098µs: Temporary Error: unexpected response code: 503
I0329 17:34:59.870464  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[917ff8c0-1fb8-4b30-a802-ed4ae7d5f55e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc000d4f4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7600 TLS:<nil>}
I0329 17:34:59.870527  611206 retry.go:31] will retry after 901.244µs: Temporary Error: unexpected response code: 503
I0329 17:34:59.873553  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b536529f-c84c-4ac3-9b28-221706bb501b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc0004ad8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bcb00 TLS:<nil>}
I0329 17:34:59.873612  611206 retry.go:31] will retry after 644.295µs: Temporary Error: unexpected response code: 503
I0329 17:34:59.876668  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[abb707bc-70f9-4392-93e9-0e408e7cc196] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc000d4f5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000666d00 TLS:<nil>}
I0329 17:34:59.876717  611206 retry.go:31] will retry after 1.121724ms: Temporary Error: unexpected response code: 503
I0329 17:34:59.880779  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f2564990-3506-46ae-bb2e-99f05580ee3a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc00070d3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bcc00 TLS:<nil>}
I0329 17:34:59.880828  611206 retry.go:31] will retry after 1.529966ms: Temporary Error: unexpected response code: 503
I0329 17:34:59.945326  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[042a5195-9375-4b24-9882-8a7bf950ac00] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc000d4f6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7700 TLS:<nil>}
I0329 17:34:59.945400  611206 retry.go:31] will retry after 3.078972ms: Temporary Error: unexpected response code: 503
I0329 17:34:59.951171  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3bfad91-df8b-4e8d-86bb-657e27b67a0a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc0004ad9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bcd00 TLS:<nil>}
I0329 17:34:59.951238  611206 retry.go:31] will retry after 5.854223ms: Temporary Error: unexpected response code: 503
I0329 17:34:59.960177  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6841cca3-02ef-4df4-b5aa-7d763b897825] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc000d4f7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000666e00 TLS:<nil>}
I0329 17:34:59.960246  611206 retry.go:31] will retry after 11.362655ms: Temporary Error: unexpected response code: 503
I0329 17:34:59.975269  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59977c45-1eae-435a-8c3e-926ad7b52dd4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc0004adac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bce00 TLS:<nil>}
I0329 17:34:59.975358  611206 retry.go:31] will retry after 9.267303ms: Temporary Error: unexpected response code: 503
I0329 17:34:59.988354  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aac9e53f-256e-4905-850c-4bba3af6637c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:34:59 GMT]] Body:0xc000d4f8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000666f00 TLS:<nil>}
I0329 17:34:59.988426  611206 retry.go:31] will retry after 17.139291ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.046063  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b2cb67b-8ac8-4e21-bd2c-626810043b89] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc00070d4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bcf00 TLS:<nil>}
I0329 17:35:00.046149  611206 retry.go:31] will retry after 23.881489ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.073301  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[90657f21-8874-40c7-a98f-f23688ca523a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc0004adbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7800 TLS:<nil>}
I0329 17:35:00.073378  611206 retry.go:31] will retry after 42.427055ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.146462  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fba2dfdf-3dc2-44bd-b946-6b0197a8c667] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc00070d5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000667000 TLS:<nil>}
I0329 17:35:00.146527  611206 retry.go:31] will retry after 51.432832ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.244732  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6a9d57a-e602-45e8-8484-90ac0d211bde] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc000d4f9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7900 TLS:<nil>}
I0329 17:35:00.244833  611206 retry.go:31] will retry after 78.14118ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.346345  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e5e1e32-bfba-4d65-965c-db28a35c40cf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc00070d6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd000 TLS:<nil>}
I0329 17:35:00.346437  611206 retry.go:31] will retry after 174.255803ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.545331  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8153aa19-bb79-4963-9a1d-25965f3f30c9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc0004adcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7a00 TLS:<nil>}
I0329 17:35:00.545408  611206 retry.go:31] will retry after 159.291408ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.745448  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe505bfb-a8ea-470f-b87c-08f4571f076c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc000d4fac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000667100 TLS:<nil>}
I0329 17:35:00.745523  611206 retry.go:31] will retry after 233.827468ms: Temporary Error: unexpected response code: 503
I0329 17:35:00.982554  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b2fabe5-7298-4387-a920-e97c935c1fef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:00 GMT]] Body:0xc0004addc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd100 TLS:<nil>}
I0329 17:35:00.982643  611206 retry.go:31] will retry after 429.392365ms: Temporary Error: unexpected response code: 503
I0329 17:35:01.415069  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7f061fc2-c59f-46fa-b6f6-3b09d1fe9c01] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:01 GMT]] Body:0xc000d4fbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000667200 TLS:<nil>}
I0329 17:35:01.415135  611206 retry.go:31] will retry after 801.058534ms: Temporary Error: unexpected response code: 503
I0329 17:35:02.219362  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf2be50d-fb1a-4221-9094-a65decb067b8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:02 GMT]] Body:0xc0004adec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd200 TLS:<nil>}
I0329 17:35:02.219449  611206 retry.go:31] will retry after 1.529087469s: Temporary Error: unexpected response code: 503
I0329 17:35:03.751484  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[baea5d8f-c96f-4b40-b547-42bf17bf0cbd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:03 GMT]] Body:0xc000d4fcc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000667300 TLS:<nil>}
I0329 17:35:03.751563  611206 retry.go:31] will retry after 1.335136154s: Temporary Error: unexpected response code: 503
I0329 17:35:05.090540  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1419f1bf-0082-437f-beea-5594ef4890bd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:05 GMT]] Body:0xc000d4fd80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd300 TLS:<nil>}
I0329 17:35:05.090637  611206 retry.go:31] will retry after 2.012724691s: Temporary Error: unexpected response code: 503
I0329 17:35:07.106345  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2130b55f-b371-41c3-a4c0-b8137c83670d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:07 GMT]] Body:0xc00070d800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd400 TLS:<nil>}
I0329 17:35:07.106413  611206 retry.go:31] will retry after 4.744335389s: Temporary Error: unexpected response code: 503
I0329 17:35:11.855282  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[672eb8ae-7fb3-467c-b02e-0b488d989f8b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:11 GMT]] Body:0xc000d4fe80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7b00 TLS:<nil>}
I0329 17:35:11.855347  611206 retry.go:31] will retry after 4.014454686s: Temporary Error: unexpected response code: 503
I0329 17:35:15.873466  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[481882b3-8884-4611-a255-a9383e70230a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:15 GMT]] Body:0xc000d4ff40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd500 TLS:<nil>}
I0329 17:35:15.873534  611206 retry.go:31] will retry after 11.635741654s: Temporary Error: unexpected response code: 503
I0329 17:35:27.516379  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c8be6f09-5485-446b-8be1-bddc03331d95] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:27 GMT]] Body:0xc00070d900 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd600 TLS:<nil>}
I0329 17:35:27.516448  611206 retry.go:31] will retry after 15.298130033s: Temporary Error: unexpected response code: 503
I0329 17:35:42.818177  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e3693404-b922-4b3a-8875-5abae76782d3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:35:42 GMT]] Body:0xc000406080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd800 TLS:<nil>}
I0329 17:35:42.818264  611206 retry.go:31] will retry after 19.631844237s: Temporary Error: unexpected response code: 503
I0329 17:36:02.454915  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9db654cb-a995-4d6c-a1f1-8d23eb863226] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:36:02 GMT]] Body:0xc00006a600 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7c00 TLS:<nil>}
I0329 17:36:02.454992  611206 retry.go:31] will retry after 15.195386994s: Temporary Error: unexpected response code: 503
I0329 17:36:17.653274  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ee8a34b-efdd-4005-ac2e-e03431da41d4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:36:17 GMT]] Body:0xc00006a780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001bd900 TLS:<nil>}
I0329 17:36:17.653341  611206 retry.go:31] will retry after 28.402880652s: Temporary Error: unexpected response code: 503
I0329 17:36:46.061576  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[122bb989-60c5-488b-9a73-c213f8424fe5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:36:46 GMT]] Body:0xc00070da40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de7d00 TLS:<nil>}
I0329 17:36:46.061640  611206 retry.go:31] will retry after 1m6.435206373s: Temporary Error: unexpected response code: 503
I0329 17:37:52.501751  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2488a495-f7da-4875-92f5-75dd96283e62] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:37:52 GMT]] Body:0xc000e5a100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000666100 TLS:<nil>}
I0329 17:37:52.501834  611206 retry.go:31] will retry after 1m28.514497132s: Temporary Error: unexpected response code: 503
I0329 17:39:21.020368  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0fe2a14-aa0b-40a1-8d7d-a29dea36dbec] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:39:21 GMT]] Body:0xc000406100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000666200 TLS:<nil>}
I0329 17:39:21.020481  611206 retry.go:31] will retry after 34.767217402s: Temporary Error: unexpected response code: 503
I0329 17:39:55.793132  611206 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c28b08d-5f52-4641-a179-5c62e917ce87] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 29 Mar 2022 17:39:55 GMT]] Body:0xc00070c880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000de6500 TLS:<nil>}
I0329 17:39:55.793219  611206 retry.go:31] will retry after 1m5.688515861s: Temporary Error: unexpected response code: 503
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20220329171943-564087
helpers_test.go:236: (dbg) docker inspect functional-20220329171943-564087:

-- stdout --
	[
	    {
	        "Id": "15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601",
	        "Created": "2022-03-29T17:19:53.028060184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 589271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:19:53.388807933Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/hostname",
	        "HostsPath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/hosts",
	        "LogPath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601-json.log",
	        "Name": "/functional-20220329171943-564087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220329171943-564087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220329171943-564087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e-init/diff:/var/lib/docker/overlay2/9db4e23be625e034f4ded606113a10eac42e47ab03824d2ab674189ac3bfe07b/diff:/var/lib/docker/overlay2/23cb119bfb0f25fd9defc73c170f1edc0bcfc13d6d5cd5613108d72d2020b31c/diff:/var/lib/docker/overlay2/bc76d55655624ec99d26daa97a683f1a970449af5a278430e255d62e3f8b7357/diff:/var/lib/docker/overlay2/ec38188e1f99f15e49cbf2bb0c04cafd5ff241ea7966de30f2b4201c74cb77cb/diff:/var/lib/docker/overlay2/a5d5403dacc48240e9b97d1b8e55974405d1cf196bfcfa0ca32548f269cc1071/diff:/var/lib/docker/overlay2/9b4ccea6c0eb5887c76137ed35db5e0e51cf583e7c5034dcee8dd746f9a5c3bb/diff:/var/lib/docker/overlay2/8938344848e3a72fe363a3ed45041a50457e8ce2a391113dd515f7afd6d909db/diff:/var/lib/docker/overlay2/b6696995e5a26e0378be0861a49fb24498de5c915b3c02bd34ae778e05b48a9d/diff:/var/lib/docker/overlay2/f95310f65d1c113884a9ac4dc0f127daf9d1b3f623762106478e4fe41692cc2d/diff:/var/lib/docker/overlay2/30ef7d70756fc9f43cfd45ede0c78a5dbd376911f1844027d7dd8448f0d1bd2c/diff:/var/lib/docker/overlay2/aeeca576548699f29ecc5f8389942ed3bfde02e1b481e0e8365142a90064496c/diff:/var/lib/docker/overlay2/5ba2587df64129d8cf8c96c14448186757d9b360c9e3101c4a20b1edd728ce18/diff:/var/lib/docker/overlay2/64d1213878e17d1927644c40bb0d52e6a3a124b5e86daa58f166ee0704d9da9b/diff:/var/lib/docker/overlay2/7ac9b531b4439100cfb4789e5009915d72b467705e391e0d197a760783cb4e4b/diff:/var/lib/docker/overlay2/f6f1442868cd491bc73dc995e7c0b552c0d2843d43327267ee3d015edc11da4e/diff:/var/lib/docker/overlay2/c7c6c9113fac60b95369a3e535649a67c14c4c74da4c7de68bd1aaf14bce0ac3/diff:/var/lib/docker/overlay2/9eba2b84f547941ca647ea1c9eff5275fae385f1b800741ed421672c6437487a/diff:/var/lib/docker/overlay2/8bb3fb7770413b61ccdf84f4a5cccb728206fcecd1f006ca906874d3c5d4481c/diff:/var/lib/docker/overlay2/7ebf161ae3775c9e0f6ebe9e26d40e46766d5f3387c2ea279679d585cbd19866/diff:/var/lib/docker/overlay2/4d1064116e64fbf54de0c8ef70255b6fc77b005725e02a52281bfa0e5de5a7af/diff:/var/lib/docker/overlay2/f82ba82619b078a905b7e5a1466fc8ca89d8664fa04dc61cf5914aa0c34ae177/diff:/var/lib/docker/overlay2/728d17980e4c7c100416d2fd1be83673103f271144543fb61798e4a0303c1d63/diff:/var/lib/docker/overlay2/d7e175c39be427bc2372876df06eb27ba2b10462c347d1ee8e43a957642f2ca5/diff:/var/lib/docker/overlay2/1e872f98bd0c0432c85e2812af12d33dcacc384f762347889c846540583137be/diff:/var/lib/docker/overlay2/f5da27e443a249317e2670de2816cbae827a62edb0e4475ac004418a25e279d8/diff:/var/lib/docker/overlay2/33e17a308b62964f37647c1f62c13733476a7eaadb28f29ad1d1f21b5d0456ee/diff:/var/lib/docker/overlay2/6b6bb10e19be67a77e94bd177e583241953840e08b30d68eca16b63e2c5fd574/diff:/var/lib/docker/overlay2/8e061338d4e4cf068f61861fc08144097ee117189101f3a71f361481dc288fd3/diff:/var/lib/docker/overlay2/27d99a6f864614a9dad7efdece7ace23256ff5489d66daed625285168e2fcc48/diff:/var/lib/docker/overlay2/8642d51376c5c35316cb2d9d5832c7382cb5e0d9df1b766f5187ab10eaafb4d6/diff:/var/lib/docker/overlay2/9ffbd3f47292209200a9ab357ba5f68beb15c82f2511804d74dcf2ad3b44155f/diff:/var/lib/docker/overlay2/d2512b29dd494ed5dc05b52800efe6a97b07803c1d3172d6a9d9b0b45a7e19eb/diff:/var/lib/docker/overlay2/7e87858609885bf7a576966de8888d2db30e18d8b582b6f6434176c59d71cca5/diff:/var/lib/docker/overlay2/54e00a6514941a66517f8aa879166fd5e8660f7ab673e554aa927bfcb19a145d/diff:/var/lib/docker/overlay2/02ced31172683ffa2fe2365aa827ef66d364bd100865b9095680e2c79f2e868e/diff:/var/lib/docker/overlay2/e65eba629c5d8828d9a2c4b08b322edb4b07793e8bfb091b93fd15013209a387/diff:/var/lib/docker/overlay2/3ee0fd224e7a66a3d8cc598c64cdaf0436eab7f466aa34e3406a0058e16a7f30/diff:/var/lib/docker/overlay2/29b13dceeebd7568b56f69e176c7d37f5b88fe4c13065f01a6f3a36606d5b62c/diff:/var/lib/docker/overlay2/b10262d215789890fd0056a6e4ff379df5e663524b5b96d9671e10c54adc5a25/diff:/var/lib/docker/overlay2/a292b90c390a4decbdd1887aa58471b2827752df1ef18358a1fb82fd665de0b4/diff:/var/lib/docker/overlay2/fbac86c28573a8fd7399f9fd0a51ebb8eef8158b8264c242aa16e16f6227522f/diff:/var/lib/docker/overlay2/b0ddb339636d56ff9132bc75064a21216c2e71f3b3b53d4a39f9fe66133219c2/diff:/var/lib/docker/overlay2/9e52af85e3d331425d5757a9bde2ace3e5e12622a0d748e6559c2a74907adaa1/diff:/var/lib/docker/overlay2/e856b1e5a3fe78b31306313bdf9bc42d7b1f45dc864587f3ce5dfd3793cb96d3/diff:/var/lib/docker/overlay2/1fbed3ccb397ff1873888dc253845b880a4d30dda3b181220402f7592d8a3ad7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220329171943-564087",
	                "Source": "/var/lib/docker/volumes/functional-20220329171943-564087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220329171943-564087",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220329171943-564087",
	                "name.minikube.sigs.k8s.io": "functional-20220329171943-564087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7d36d2cb4b207d354d3edeb8bd57ffddaa9cea8689c8c63eb17908564341080",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49463"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49462"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49461"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e7d36d2cb4b2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220329171943-564087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "15ca43c1287c",
	                        "functional-20220329171943-564087"
	                    ],
	                    "NetworkID": "0f425cae9470b2772507f4689a703cb8884a8795e13bf34ba02a82ab3aa92e69",
	                    "EndpointID": "ad09f7d8cfe5db54b7232c5900e56038d6cf0563cc03270369e9c0b3f05bf5c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20220329171943-564087 -n functional-20220329171943-564087
helpers_test.go:245: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 logs -n 25: (1.05534847s)
helpers_test.go:253: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                           Args                            |             Profile              |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:52 UTC | Tue, 29 Mar 2022 17:34:53 UTC |
	|         | service list                                              |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:53 UTC | Tue, 29 Mar 2022 17:34:54 UTC |
	|         | service --namespace=default                               |                                  |         |         |                               |                               |
	|         | --https --url hello-node                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:54 UTC | Tue, 29 Mar 2022 17:34:55 UTC |
	|         | service hello-node --url                                  |                                  |         |         |                               |                               |
	|         | --format={{.IP}}                                          |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:55 UTC | Tue, 29 Mar 2022 17:34:56 UTC |
	|         | service hello-node --url                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:05 UTC | Tue, 29 Mar 2022 17:35:05 UTC |
	|         | ssh stat                                                  |                                  |         |         |                               |                               |
	|         | /mount-9p/created-by-test                                 |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:05 UTC | Tue, 29 Mar 2022 17:35:05 UTC |
	|         | ssh stat                                                  |                                  |         |         |                               |                               |
	|         | /mount-9p/created-by-pod                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:06 UTC | Tue, 29 Mar 2022 17:35:06 UTC |
	|         | ssh sudo umount -f /mount-9p                              |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:07 UTC | Tue, 29 Mar 2022 17:35:07 UTC |
	|         | ssh findmnt -T /mount-9p | grep                           |                                  |         |         |                               |                               |
	|         | 9p                                                        |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:07 UTC | Tue, 29 Mar 2022 17:35:07 UTC |
	|         | ssh -- ls -la /mount-9p                                   |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:08 UTC | Tue, 29 Mar 2022 17:35:08 UTC |
	|         | cp testdata/cp-test.txt                                   |                                  |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:08 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | ssh -n                                                    |                                  |         |         |                               |                               |
	|         | functional-20220329171943-564087                          |                                  |         |         |                               |                               |
	|         | sudo cat                                                  |                                  |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087 cp                       | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | functional-20220329171943-564087:/home/docker/cp-test.txt |                                  |         |         |                               |                               |
	|         | /tmp/mk_test2393693594/cp-test.txt                        |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | ssh -n                                                    |                                  |         |         |                               |                               |
	|         | functional-20220329171943-564087                          |                                  |         |         |                               |                               |
	|         | sudo cat                                                  |                                  |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | version --short                                           |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | version -o=json --components                              |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:10 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | update-context --alsologtostderr                          |                                  |         |         |                               |                               |
	|         | -v=2                                                      |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:10 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | update-context --alsologtostderr                          |                                  |         |         |                               |                               |
	|         | -v=2                                                      |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:10 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | update-context --alsologtostderr                          |                                  |         |         |                               |                               |
	|         | -v=2                                                      |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:11 UTC | Tue, 29 Mar 2022 17:35:11 UTC |
	|         | image ls --format short                                   |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:11 UTC | Tue, 29 Mar 2022 17:35:11 UTC |
	|         | image ls --format yaml                                    |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087 image build -t           | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:11 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | localhost/my-image:functional-20220329171943-564087       |                                  |         |         |                               |                               |
	|         | testdata/build                                            |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:14 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | image ls --format json                                    |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:14 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | image ls                                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:14 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | image ls --format table                                   |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:38:22 UTC | Tue, 29 Mar 2022 17:38:23 UTC |
	|         | logs -n 25                                                |                                  |         |         |                               |                               |
	|---------|-----------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:34:57
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:34:57.797333  611076 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:34:57.798018  611076 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:57.798033  611076 out.go:310] Setting ErrFile to fd 2...
	I0329 17:34:57.798038  611076 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:57.798251  611076 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:34:57.798630  611076 out.go:304] Setting JSON to false
	I0329 17:34:57.800308  611076 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8251,"bootTime":1648567047,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:34:57.800406  611076 start.go:124] virtualization: kvm guest
	I0329 17:34:57.802464  611076 out.go:176] * [functional-20220329171943-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 17:34:57.804088  611076 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:34:57.805283  611076 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:34:57.806483  611076 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:34:57.807703  611076 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 17:34:57.808872  611076 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 17:34:57.809354  611076 config.go:176] Loaded profile config "functional-20220329171943-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:34:57.809794  611076 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:34:57.849907  611076 docker.go:137] docker version: linux-20.10.14
	I0329 17:34:57.850022  611076 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:57.946030  611076 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-29 17:34:57.884553476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:57.946166  611076 docker.go:254] overlay module found
	I0329 17:34:57.948169  611076 out.go:176] * Using the docker driver based on existing profile
	I0329 17:34:57.948212  611076 start.go:283] selected driver: docker
	I0329 17:34:57.948220  611076 start.go:800] validating driver "docker" against &{Name:functional-20220329171943-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329171943-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:34:57.948377  611076 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 17:34:57.948800  611076 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:58.047988  611076 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-29 17:34:57.98538321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:58.048715  611076 cni.go:93] Creating CNI manager for ""
	I0329 17:34:58.048746  611076 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:34:58.048760  611076 start_flags.go:306] config:
	{Name:functional-20220329171943-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329171943-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:19:53 UTC, end at Tue 2022-03-29 17:39:59 UTC. --
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.302786741Z" level=info msg="ignoring event" container=8fec37f9f6d6fefdc9dfa2f5f5d4fadad89668edce59bcfd2de9eafa6e0dbc3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.479818674Z" level=info msg="ignoring event" container=2bcded85e6133f08c6b9d3edd7fc3ac1fa56bfe038e90b5d795fb8f2d03d9643 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.598403679Z" level=info msg="ignoring event" container=0096f55eda7a59b969520aad33f95ef2f4e88449065f34aea426d256f6710870 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.714223542Z" level=info msg="ignoring event" container=5968dceb92dd3a1a2df9ee2bea931e1c9226f400d793729ba9a9c9d772fa0041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.830938882Z" level=info msg="ignoring event" container=765e783db02ae8f5b3727b2fc3c7cff355b17e5aabf9c7df6665716eee5f28ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.946671748Z" level=info msg="ignoring event" container=b33e578e2672074fffe15df6768bea414c03a08387deb2dab2ad70f4c9fc29ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:02 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:02.723431289Z" level=info msg="ignoring event" container=15e4bda80e7be6b0247b0ac5f1b80113d5ac694fa7d91d40f17a7402bae7d6af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:02 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:02.894166771Z" level=info msg="ignoring event" container=4bf5fc3e4e55c47cda60d241917b28576a8a0491fb053877f25751c540d2d340 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:11 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:11.371464080Z" level=info msg="ignoring event" container=7b4349a9f03d98e6a09125c57223ee392fa9351166515598c5af3fdb7aa6a145 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:11 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:11.431280198Z" level=info msg="ignoring event" container=448d20dc3f6b953a8980171f0d104530d77b515432e9b279770ff4346946450c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:19 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:19.290000428Z" level=info msg="ignoring event" container=1023acaf289a43a485e447984175897c7495df26906277ddaaef1ec5a2d07535 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:44 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:44.285316359Z" level=info msg="ignoring event" container=40db2146ec0aeb8ff89fc7908270a6a2d2c6bc93c7849a5f48436f72c53850ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:31:25 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:31:25.293278927Z" level=info msg="ignoring event" container=bbf0ed65b38054208296636ac3b226a5e5a4c668ff339c1c749f20505c610b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:32:49 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:32:49.299095556Z" level=info msg="ignoring event" container=f26354f5f2612ee52a84e0395d51d1d40cb0535eb28d087d27513b6fdc0581b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:02 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:02.871133153Z" level=info msg="ignoring event" container=6c90c6d6cdede439bc0501d8c92a5ba905752207e4410c47fa473caef4fa72c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:03 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:03.273150761Z" level=info msg="ignoring event" container=d1c9b97a103d0e917b21d688e7d83865268c90c17ee5a6fe9926f50ab1f876c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:04 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:04.506428541Z" level=info msg="ignoring event" container=b2eaa55b97bb5abc3ba8fa608a93b20da8d974ba3df8a9bb015395e8f3f42faa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:05 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:05.039879620Z" level=info msg="ignoring event" container=338934ac4c29442fa6a13ee0b6e060c4d094dfe5c9a3060af0fc21719a7f2606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:13 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:13.802442667Z" level=info msg="ignoring event" container=5b086069fc969e328938c2914abf4f6c1aa4644eb5d9d3769f8d548a66bc1a71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:14 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:14.011546240Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	Mar 29 17:35:25 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:25.299344638Z" level=info msg="ignoring event" container=6fb97cf86817956f58e8cc81938439e21528b1ee8470e5a909217e423885f7f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:38 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:38.301550959Z" level=info msg="ignoring event" container=e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:55 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:55.299808853Z" level=info msg="ignoring event" container=341c5acd996e57bc6e8446f5d735f2f4d24e0e32d96e4e5558011bebd241eed6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:36:42 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:36:42.304517141Z" level=info msg="ignoring event" container=5ece21b5ad77067a75d46ed1a1629eff5688d1045ac6f49fb3b60c8b538b0cbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:38:09 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:38:09.301689066Z" level=info msg="ignoring event" container=b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                        ATTEMPT             POD ID
	b0ab6d4b86f6f       e1482a24335a6                                                                                         About a minute ago   Exited              kubernetes-dashboard        5                   5a3c9f51bf72e
	e3c65b1070d0e       6e38f40d628db                                                                                         4 minutes ago        Exited              storage-provisioner         6                   cc5d0baf8413c
	b2eaa55b97bb5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago        Exited              mount-munger                0                   338934ac4c294
	d9045ca0a0691       7801cfc6d5c07                                                                                         4 minutes ago        Running             dashboard-metrics-scraper   0                   662a4b01301cf
	3e75f1ede27a8       mysql@sha256:c8f68301981a7224cc9c063fc7a97b6ef13cfc4142b4871d1a35c95777ce96f4                         4 minutes ago        Running             mysql                       0                   b137d87cb6fa7
	d20cd4783e3a9       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         5 minutes ago        Running             echoserver                  0                   d25d7b875dbd7
	2dd0f2a22875f       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         5 minutes ago        Running             echoserver                  0                   e2603dfd92fce
	e9aa8d5e73b6f       nginx@sha256:db7973cb238c8e8acea5982c1048b5987e9e4da60d20daeef7301757de97357a                         5 minutes ago        Running             nginx                       0                   d84f2425e6a68
	ed2abfdf852e4       a4ca41631cc7a                                                                                         9 minutes ago        Running             coredns                     0                   0df1f00beb0f9
	f2754d5f9fe06       3c53fa8541f95                                                                                         9 minutes ago        Running             kube-proxy                  0                   3bf69ff5fd337
	0fdf829055d35       25f8c7f3da61c                                                                                         10 minutes ago       Running             etcd                        2                   b01c21fe12444
	c4e839ad1beff       884d49d6d8c9f                                                                                         10 minutes ago       Running             kube-scheduler              2                   9abfed6c48dd8
	7001dc05b7a31       b0c9e5e4dbb14                                                                                         10 minutes ago       Running             kube-controller-manager     2                   4fc84999cfae8
	c0967de79e035       3fc1d62d65872                                                                                         10 minutes ago       Running             kube-apiserver              2                   4c49627bee296
	
	* 
	* ==> coredns [ed2abfdf852e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220329171943-564087
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220329171943-564087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=functional-20220329171943-564087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T17_29_47_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:29:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220329171943-564087
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:39:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220329171943-564087
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                624efa2b-b5b3-4a5b-aa55-17f0f305f536
	  Boot ID:                    b9773761-6fd5-4dc5-89e9-c6bdd61e4f8f
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-5pkpn                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  default                     hello-node-connect-74cf8bc446-j6cdn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  default                     mysql-b87c45988-hmlz9                                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m11s
	  default                     nginx-svc                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 coredns-64897985d-775kd                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m59s
	  kube-system                 etcd-functional-20220329171943-564087                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-functional-20220329171943-564087             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-20220329171943-564087    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fpn9r                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-functional-20220329171943-564087             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kubernetes-dashboard        dashboard-metrics-scraper-58549894f-pch9z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-ccd587f44-wwxh6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 9m58s              kube-proxy  
	  Normal  NodeHasSufficientMemory  10m (x6 over 10m)  kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x6 over 10m)  kubelet     Node functional-20220329171943-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x5 over 10m)  kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m                kubelet     Node functional-20220329171943-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             10m                kubelet     Node functional-20220329171943-564087 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                10m                kubelet     Node functional-20220329171943-564087 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007853] FS-Cache: O-key=[8] '64fc070000000000'
	[  +0.006301] FS-Cache: N-cookie c=000000009093c4f7 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009333] FS-Cache: N-cookie d=00000000c16fb940 n=00000000c141da8a
	[  +0.007863] FS-Cache: N-key=[8] '64fc070000000000'
	[  +0.008255] FS-Cache: Duplicate cookie detected
	[  +0.005487] FS-Cache: O-cookie c=00000000f44cc384 [p=00000000e9f3ac79 fl=226 nc=0 na=1]
	[  +0.009515] FS-Cache: O-cookie d=00000000c16fb940 n=00000000037b7c9a
	[  +0.007866] FS-Cache: O-key=[8] '64fc070000000000'
	[  +0.006276] FS-Cache: N-cookie c=0000000059c41939 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009336] FS-Cache: N-cookie d=00000000c16fb940 n=00000000123f1292
	[  +0.007866] FS-Cache: N-key=[8] '64fc070000000000'
	[  +1.066384] FS-Cache: Duplicate cookie detected
	[  +0.004669] FS-Cache: O-cookie c=0000000045133206 [p=00000000e9f3ac79 fl=226 nc=0 na=1]
	[  +0.008153] FS-Cache: O-cookie d=00000000c16fb940 n=00000000241714ab
	[  +0.006480] FS-Cache: O-key=[8] '63fc070000000000'
	[  +0.004970] FS-Cache: N-cookie c=000000009762aa29 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009375] FS-Cache: N-cookie d=00000000c16fb940 n=000000000fdd68b4
	[  +0.007861] FS-Cache: N-key=[8] '63fc070000000000'
	[  +0.342363] FS-Cache: Duplicate cookie detected
	[  +0.004674] FS-Cache: O-cookie c=00000000ad0570cd [p=00000000e9f3ac79 fl=226 nc=0 na=1]
	[  +0.008153] FS-Cache: O-cookie d=00000000c16fb940 n=00000000815febdc
	[  +0.006485] FS-Cache: O-key=[8] '66fc070000000000'
	[  +0.004965] FS-Cache: N-cookie c=0000000005b7ac19 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009350] FS-Cache: N-cookie d=00000000c16fb940 n=000000000cec3a16
	[  +0.007854] FS-Cache: N-key=[8] '66fc070000000000'
	
	* 
	* ==> etcd [0fdf829055d3] <==
	* {"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.054Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.054Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220329171943-564087 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:29:41.056Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-29T17:29:41.056Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-03-29T17:34:45.593Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.846819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2022-03-29T17:34:45.593Z","caller":"traceutil/trace.go:171","msg":"trace[871438359] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:677; }","duration":"116.993968ms","start":"2022-03-29T17:34:45.476Z","end":"2022-03-29T17:34:45.593Z","steps":["trace[871438359] 'range keys from in-memory index tree'  (duration: 116.745099ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T17:39:42.447Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":665}
	{"level":"info","ts":"2022-03-29T17:39:42.448Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":665,"took":"615.426µs"}
	
	* 
	* ==> kernel <==
	*  17:39:59 up  2:22,  0 users,  load average: 0.96, 0.51, 0.61
	Linux functional-20220329171943-564087 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c0967de79e03] <==
	* I0329 17:29:44.964300       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0329 17:29:44.964326       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0329 17:29:44.969091       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0329 17:29:44.971906       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0329 17:29:44.971925       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0329 17:29:45.303287       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0329 17:29:45.331174       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0329 17:29:45.465127       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0329 17:29:45.469953       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0329 17:29:45.471109       1 controller.go:611] quota admission added evaluator for: endpoints
	I0329 17:29:45.476533       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0329 17:29:46.095632       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0329 17:29:46.794369       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0329 17:29:46.849160       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0329 17:29:46.861047       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0329 17:29:47.049189       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0329 17:29:59.844053       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0329 17:29:59.992833       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0329 17:30:01.180267       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0329 17:34:30.278695       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.99.176.178]
	I0329 17:34:32.174384       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.106.38.92]
	I0329 17:34:41.664305       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.102.234.129]
	I0329 17:34:48.728466       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.98.32.203]
	I0329 17:34:59.658919       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.186.203]
	I0329 17:34:59.671984       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.55.2]
	
	* 
	* ==> kube-controller-manager [7001dc05b7a3] <==
	* I0329 17:34:59.467026       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0329 17:34:59.469479       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0329 17:34:59.469489       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0329 17:34:59.550270       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-58549894f-pch9z"
	I0329 17:34:59.550354       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-wwxh6"
	I0329 17:35:14.344936       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:35:29.345469       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:35:44.345758       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:35:59.346691       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:14.346846       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:29.347463       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:44.348449       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:59.349319       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:14.350239       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:29.351188       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:44.352062       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:59.352785       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:38:14.353141       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:38:29.353512       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:38:44.353735       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:38:59.354272       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:39:14.355477       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:39:29.356410       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:39:44.357439       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:39:59.358136       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [f2754d5f9fe0] <==
	* I0329 17:30:01.151791       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:30:01.151858       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:30:01.151899       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:30:01.176234       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:30:01.176272       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:30:01.176281       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:30:01.176301       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:30:01.176620       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:30:01.177748       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:30:01.177812       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:30:01.178368       1 config.go:317] "Starting service config controller"
	I0329 17:30:01.178387       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:30:01.278510       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0329 17:30:01.278510       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [c4e839ad1bef] <==
	* W0329 17:29:44.066919       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:44.066975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0329 17:29:44.066981       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0329 17:29:44.067000       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0329 17:29:44.067099       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:29:44.067128       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:29:44.067265       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0329 17:29:44.067289       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0329 17:29:44.067355       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 17:29:44.067376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 17:29:44.067074       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:44.067604       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0329 17:29:44.067608       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0329 17:29:44.067731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0329 17:29:44.067538       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:29:44.067860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:29:44.951148       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:44.951198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:29:44.953002       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:29:44.953027       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:29:45.079869       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:29:45.079907       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:29:45.144762       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:45.144792       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0329 17:29:45.460902       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:19:53 UTC, end at Tue 2022-03-29 17:39:59 UTC. --
	Mar 29 17:38:41 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:41.157966   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:38:48 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:48.157468   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:38:48 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:48.157764   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:38:53 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:53.158347   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:38:53 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:53.158576   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:38:59 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:59.157800   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:38:59 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:59.158104   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:39:06 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:06.157874   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:39:06 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:06.158125   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:39:11 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:11.157630   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:39:11 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:11.157931   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:39:17 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:17.158237   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:39:17 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:17.158440   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:39:23 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:23.158347   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:39:23 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:23.158750   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:39:29 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:29.158289   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:39:29 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:29.158588   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:39:35 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:35.157406   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:39:35 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:35.157826   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:39:40 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:40.157446   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:39:40 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:40.157731   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:39:50 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:50.157797   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:39:50 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:50.158086   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:39:51 functional-20220329171943-564087 kubelet[10814]: I0329 17:39:51.157375   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:39:51 functional-20220329171943-564087 kubelet[10814]: E0329 17:39:51.157601   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
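	The kubelet entries above cycle between "RemoveContainer" and CrashLoopBackOff errors, with the storage-provisioner at the 5m0s cap and the dashboard still at 2m40s. A minimal sketch (assuming the standard kubelet CrashLoopBackOff policy: 10s initial delay, doubling per crash, capped at 5m) reproduces the delays seen in these logs:

	```python
	# Sketch of kubelet CrashLoopBackOff restart delays (assumed standard
	# policy: 10s initial, doubled per crash, capped at 300s / 5m).
	def crashloop_backoffs(restarts, initial=10, cap=300):
	    """Return the back-off delay in seconds before each restart."""
	    delays = []
	    delay = initial
	    for _ in range(restarts):
	        delays.append(min(delay, cap))
	        delay *= 2
	    return delays

	# The dashboard's "back-off 2m40s" is the 5th delay (160s); the
	# storage-provisioner's "back-off 5m0s" is the 300s cap.
	print(crashloop_backoffs(7))  # [10, 20, 40, 80, 160, 300, 300]
	```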
	
	* 
	* ==> kubernetes-dashboard [b0ab6d4b86f6] <==
	* 2022/03/29 17:38:09 Using namespace: kubernetes-dashboard
	2022/03/29 17:38:09 Using in-cluster config to connect to apiserver
	2022/03/29 17:38:09 Using secret token for csrf signing
	2022/03/29 17:38:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/03/29 17:38:09 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00041f6a0)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x413
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00021ef00)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0xc6
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00021ef00)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x47
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x21c
	
	* 
	* ==> storage-provisioner [e3c65b1070d0] <==
	* I0329 17:35:38.285243       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0329 17:35:38.286496       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20220329171943-564087 -n functional-20220329171943-564087
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20220329171943-564087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox-mount
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20220329171943-564087 describe pod busybox-mount
helpers_test.go:281: (dbg) kubectl --context functional-20220329171943-564087 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220329171943-564087/192.168.49.2
	Start Time:   Tue, 29 Mar 2022 17:34:53 +0000
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.7
	IPs:
	  IP:  172.17.0.7
	Containers:
	  mount-munger:
	    Container ID:  docker://b2eaa55b97bb5abc3ba8fa608a93b20da8d974ba3df8a9bb015395e8f3f42faa
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 29 Mar 2022 17:35:04 +0000
	      Finished:     Tue, 29 Mar 2022 17:35:04 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbtnr (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pbtnr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m7s   default-scheduler  Successfully assigned default/busybox-mount to functional-20220329171943-564087
	  Normal  Pulling    5m6s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m56s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 9.661234083s
	  Normal  Created    4m56s  kubelet            Created container mount-munger
	  Normal  Started    4m56s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:284: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (302.08s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (232.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [77aadc78-8bda-4817-9c83-e76ccd0bd850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010097199s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220329171943-564087 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220329171943-564087 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json
E0329 17:35:41.058247  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329171943-564087 get pvc myclaim -o=json
functional_test_pvc_test.go:93: failed to check storage phase: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:759fd68d-dd66-4bf2-bf27-eddc158aa7ec ResourceVersion:645 Generation:0 CreationTimestamp:2022-03-29 17:34:36 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ClusterName: ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0006fe1e0 VolumeMode:0xc0006fe220 DataSource:nil DataSourceRef:nil} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] ResizeStatus:<nil>}})
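The assertion that fails above compares the claim's Status.Phase against "Bound" after repeated `kubectl get pvc myclaim -o=json` polls. A minimal sketch of that phase check, using a hypothetical trimmed-down JSON payload (not output captured from this cluster), shows the mismatch the test reports:

```python
import json

# Hypothetical, trimmed PVC document for illustration only; the real test
# parses the full `kubectl get pvc myclaim -o=json` output.
pvc_json = """
{
  "kind": "PersistentVolumeClaim",
  "metadata": {"name": "myclaim", "namespace": "default"},
  "spec": {"accessModes": ["ReadWriteOnce"],
           "resources": {"requests": {"storage": "500Mi"}}},
  "status": {"phase": "Pending"}
}
"""

def pvc_phase(doc: str) -> str:
    """Extract Status.Phase from a PVC JSON document."""
    return json.loads(doc)["status"]["phase"]

phase = pvc_phase(pvc_json)
print(phase)             # Pending
print(phase == "Bound")  # False: the same mismatch the test logs above
```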
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20220329171943-564087
helpers_test.go:236: (dbg) docker inspect functional-20220329171943-564087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601",
	        "Created": "2022-03-29T17:19:53.028060184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 589271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:19:53.388807933Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/hostname",
	        "HostsPath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/hosts",
	        "LogPath": "/var/lib/docker/containers/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601/15ca43c1287ce50d97e397e27d78758fa2893a52292e7de2c0d58cb46f20a601-json.log",
	        "Name": "/functional-20220329171943-564087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220329171943-564087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220329171943-564087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e-init/diff:/var/lib/docker/overlay2/9db4e23be625e034f4ded606113a10eac42e47ab03824d2ab674189ac3bfe07b/diff:/var/lib/docker/overlay2/23cb119bfb0f25fd9defc73c170f1edc0bcfc13d6d5cd5613108d72d2020b31c/diff:/var/lib/docker/overlay2/bc76d55655624ec99d26daa97a683f1a970449af5a278430e255d62e3f8b7357/diff:/var/lib/docker/overlay2/ec38188e1f99f15e49cbf2bb0c04cafd5ff241ea7966de30f2b4201c74cb77cb/diff:/var/lib/docker/overlay2/a5d5403dacc48240e9b97d1b8e55974405d1cf196bfcfa0ca32548f269cc1071/diff:/var/lib/docker/overlay2/9b4ccea6c0eb5887c76137ed35db5e0e51cf583e7c5034dcee8dd746f9a5c3bb/diff:/var/lib/docker/overlay2/8938344848e3a72fe363a3ed45041a50457e8ce2a391113dd515f7afd6d909db/diff:/var/lib/docker/overlay2/b6696995e5a26e0378be0861a49fb24498de5c915b3c02bd34ae778e05b48a9d/diff:/var/lib/docker/overlay2/f95310f65d1c113884a9ac4dc0f127daf9d1b3f623762106478e4fe41692cc2d/diff:/var/lib/docker/overlay2/30ef7d70756fc9f43cfd45ede0c78a5dbd376911f1844027d7dd8448f0d1bd2c/diff:/var/lib/docker/overlay2/aeeca576548699f29ecc5f8389942ed3bfde02e1b481e0e8365142a90064496c/diff:/var/lib/docker/overlay2/5ba2587df64129d8cf8c96c14448186757d9b360c9e3101c4a20b1edd728ce18/diff:/var/lib/docker/overlay2/64d1213878e17d1927644c40bb0d52e6a3a124b5e86daa58f166ee0704d9da9b/diff:/var/lib/docker/overlay2/7ac9b531b4439100cfb4789e5009915d72b467705e391e0d197a760783cb4e4b/diff:/var/lib/docker/overlay2/f6f1442868cd491bc73dc995e7c0b552c0d2843d43327267ee3d015edc11da4e/diff:/var/lib/docker/overlay2/c7c6c9113fac60b95369a3e535649a67c14c4c74da4c7de68bd1aaf14bce0ac3/diff:/var/lib/docker/overlay2/9eba2b84f547941ca647ea1c9eff5275fae385f1b800741ed421672c6437487a/diff:/var/lib/docker/overlay2/8bb3fb7770413b61ccdf84f4a5cccb728206fcecd1f006ca906874d3c5d4481c/diff:/var/lib/docker/overlay2/7ebf161ae3775c9e0f6ebe9e26d40e46766d5f3387c2ea279679d585cbd19866/diff:/var/lib/docker/overlay2/4d1064116e64fbf54de0c8ef70255b6fc77b005725e02a52281bfa0e5de5a7af/diff:/var/lib/docker/overlay2/f82ba82619b078a905b7e5a1466fc8ca89d8664fa04dc61cf5914aa0c34ae177/diff:/var/lib/docker/overlay2/728d17980e4c7c100416d2fd1be83673103f271144543fb61798e4a0303c1d63/diff:/var/lib/docker/overlay2/d7e175c39be427bc2372876df06eb27ba2b10462c347d1ee8e43a957642f2ca5/diff:/var/lib/docker/overlay2/1e872f98bd0c0432c85e2812af12d33dcacc384f762347889c846540583137be/diff:/var/lib/docker/overlay2/f5da27e443a249317e2670de2816cbae827a62edb0e4475ac004418a25e279d8/diff:/var/lib/docker/overlay2/33e17a308b62964f37647c1f62c13733476a7eaadb28f29ad1d1f21b5d0456ee/diff:/var/lib/docker/overlay2/6b6bb10e19be67a77e94bd177e583241953840e08b30d68eca16b63e2c5fd574/diff:/var/lib/docker/overlay2/8e061338d4e4cf068f61861fc08144097ee117189101f3a71f361481dc288fd3/diff:/var/lib/docker/overlay2/27d99a6f864614a9dad7efdece7ace23256ff5489d66daed625285168e2fcc48/diff:/var/lib/docker/overlay2/8642d51376c5c35316cb2d9d5832c7382cb5e0d9df1b766f5187ab10eaafb4d6/diff:/var/lib/docker/overlay2/9ffbd3f47292209200a9ab357ba5f68beb15c82f2511804d74dcf2ad3b44155f/diff:/var/lib/docker/overlay2/d2512b29dd494ed5dc05b52800efe6a97b07803c1d3172d6a9d9b0b45a7e19eb/diff:/var/lib/docker/overlay2/7e87858609885bf7a576966de8888d2db30e18d8b582b6f6434176c59d71cca5/diff:/var/lib/docker/overlay2/54e00a6514941a66517f8aa879166fd5e8660f7ab673e554aa927bfcb19a145d/diff:/var/lib/docker/overlay2/02ced31172683ffa2fe2365aa827ef66d364bd100865b9095680e2c79f2e868e/diff:/var/lib/docker/overlay2/e65eba629c5d8828d9a2c4b08b322edb4b07793e8bfb091b93fd15013209a387/diff:/var/lib/docker/overlay2/3ee0fd224e7a66a3d8cc598c64cdaf0436eab7f466aa34e3406a0058e16a7f30/diff:/var/lib/docker/overlay2/29b13dceeebd7568b56f69e176c7d37f5b88fe4c13065f01a6f3a36606d5b62c/diff:/var/lib/docker/overlay2/b10262d215789890fd0056a6e4ff379df5e663524b5b96d9671e10c54adc5a25/diff:/var/lib/docker/overlay2/a292b90c390a4decbdd1887aa58471b2827752df1ef18358a1fb82fd665de0b4/diff:/var/lib/docker/overlay2/fbac86c28573a8fd7399f9fd0a51ebb8eef8158b8264c242aa16e16f6227522f/diff:/var/lib/docker/overlay2/b0ddb339636d56ff9132bc75064a21216c2e71f3b3b53d4a39f9fe66133219c2/diff:/var/lib/docker/overlay2/9e52af85e3d331425d5757a9bde2ace3e5e12622a0d748e6559c2a74907adaa1/diff:/var/lib/docker/overlay2/e856b1e5a3fe78b31306313bdf9bc42d7b1f45dc864587f3ce5dfd3793cb96d3/diff:/var/lib/docker/overlay2/1fbed3ccb397ff1873888dc253845b880a4d30dda3b181220402f7592d8a3ad7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dfe2c2380f26a047749fc3e1df2ea0b8c438675d4fd3ac822ad903a4380e128e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220329171943-564087",
	                "Source": "/var/lib/docker/volumes/functional-20220329171943-564087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220329171943-564087",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220329171943-564087",
	                "name.minikube.sigs.k8s.io": "functional-20220329171943-564087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7d36d2cb4b207d354d3edeb8bd57ffddaa9cea8689c8c63eb17908564341080",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49463"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49462"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49461"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e7d36d2cb4b2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220329171943-564087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "15ca43c1287c",
	                        "functional-20220329171943-564087"
	                    ],
	                    "NetworkID": "0f425cae9470b2772507f4689a703cb8884a8795e13bf34ba02a82ab3aa92e69",
	                    "EndpointID": "ad09f7d8cfe5db54b7232c5900e56038d6cf0563cc03270369e9c0b3f05bf5c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20220329171943-564087 -n functional-20220329171943-564087
helpers_test.go:245: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 logs -n 25: (1.072331699s)
helpers_test.go:253: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                           Args                            |             Profile              |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:52 UTC | Tue, 29 Mar 2022 17:34:52 UTC |
	|         | ssh cat                                                   |                                  |         |         |                               |                               |
	|         | /mount-9p/test-1648575291016083056                        |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:52 UTC | Tue, 29 Mar 2022 17:34:53 UTC |
	|         | service list                                              |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:53 UTC | Tue, 29 Mar 2022 17:34:54 UTC |
	|         | service --namespace=default                               |                                  |         |         |                               |                               |
	|         | --https --url hello-node                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:54 UTC | Tue, 29 Mar 2022 17:34:55 UTC |
	|         | service hello-node --url                                  |                                  |         |         |                               |                               |
	|         | --format={{.IP}}                                          |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:34:55 UTC | Tue, 29 Mar 2022 17:34:56 UTC |
	|         | service hello-node --url                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:05 UTC | Tue, 29 Mar 2022 17:35:05 UTC |
	|         | ssh stat                                                  |                                  |         |         |                               |                               |
	|         | /mount-9p/created-by-test                                 |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:05 UTC | Tue, 29 Mar 2022 17:35:05 UTC |
	|         | ssh stat                                                  |                                  |         |         |                               |                               |
	|         | /mount-9p/created-by-pod                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:06 UTC | Tue, 29 Mar 2022 17:35:06 UTC |
	|         | ssh sudo umount -f /mount-9p                              |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:07 UTC | Tue, 29 Mar 2022 17:35:07 UTC |
	|         | ssh findmnt -T /mount-9p | grep                           |                                  |         |         |                               |                               |
	|         | 9p                                                        |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:07 UTC | Tue, 29 Mar 2022 17:35:07 UTC |
	|         | ssh -- ls -la /mount-9p                                   |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:08 UTC | Tue, 29 Mar 2022 17:35:08 UTC |
	|         | cp testdata/cp-test.txt                                   |                                  |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:08 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | ssh -n                                                    |                                  |         |         |                               |                               |
	|         | functional-20220329171943-564087                          |                                  |         |         |                               |                               |
	|         | sudo cat                                                  |                                  |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087 cp                       | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | functional-20220329171943-564087:/home/docker/cp-test.txt |                                  |         |         |                               |                               |
	|         | /tmp/mk_test2393693594/cp-test.txt                        |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | ssh -n                                                    |                                  |         |         |                               |                               |
	|         | functional-20220329171943-564087                          |                                  |         |         |                               |                               |
	|         | sudo cat                                                  |                                  |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:09 UTC |
	|         | version --short                                           |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:09 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | version -o=json --components                              |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:10 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | update-context --alsologtostderr                          |                                  |         |         |                               |                               |
	|         | -v=2                                                      |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:10 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | update-context --alsologtostderr                          |                                  |         |         |                               |                               |
	|         | -v=2                                                      |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:10 UTC | Tue, 29 Mar 2022 17:35:10 UTC |
	|         | update-context --alsologtostderr                          |                                  |         |         |                               |                               |
	|         | -v=2                                                      |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:11 UTC | Tue, 29 Mar 2022 17:35:11 UTC |
	|         | image ls --format short                                   |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:11 UTC | Tue, 29 Mar 2022 17:35:11 UTC |
	|         | image ls --format yaml                                    |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087 image build -t           | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:11 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | localhost/my-image:functional-20220329171943-564087       |                                  |         |         |                               |                               |
	|         | testdata/build                                            |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:14 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | image ls --format json                                    |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:14 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | image ls                                                  |                                  |         |         |                               |                               |
	| -p      | functional-20220329171943-564087                          | functional-20220329171943-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:14 UTC | Tue, 29 Mar 2022 17:35:14 UTC |
	|         | image ls --format table                                   |                                  |         |         |                               |                               |
	|---------|-----------------------------------------------------------|----------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:34:57
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:34:57.797333  611076 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:34:57.798018  611076 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:57.798033  611076 out.go:310] Setting ErrFile to fd 2...
	I0329 17:34:57.798038  611076 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:57.798251  611076 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:34:57.798630  611076 out.go:304] Setting JSON to false
	I0329 17:34:57.800308  611076 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8251,"bootTime":1648567047,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:34:57.800406  611076 start.go:124] virtualization: kvm guest
	I0329 17:34:57.802464  611076 out.go:176] * [functional-20220329171943-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 17:34:57.804088  611076 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:34:57.805283  611076 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:34:57.806483  611076 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:34:57.807703  611076 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 17:34:57.808872  611076 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 17:34:57.809354  611076 config.go:176] Loaded profile config "functional-20220329171943-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:34:57.809794  611076 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:34:57.849907  611076 docker.go:137] docker version: linux-20.10.14
	I0329 17:34:57.850022  611076 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:57.946030  611076 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-29 17:34:57.884553476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:57.946166  611076 docker.go:254] overlay module found
	I0329 17:34:57.948169  611076 out.go:176] * Using the docker driver based on existing profile
	I0329 17:34:57.948212  611076 start.go:283] selected driver: docker
	I0329 17:34:57.948220  611076 start.go:800] validating driver "docker" against &{Name:functional-20220329171943-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329171943-564087 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:fa
lse registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:34:57.948377  611076 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 17:34:57.948800  611076 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:58.047988  611076 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-29 17:34:57.98538321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:58.048715  611076 cni.go:93] Creating CNI manager for ""
	I0329 17:34:58.048746  611076 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:34:58.048760  611076 start_flags.go:306] config:
	{Name:functional-20220329171943-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329171943-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:fa
lse volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:19:53 UTC, end at Tue 2022-03-29 17:38:23 UTC. --
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.302786741Z" level=info msg="ignoring event" container=8fec37f9f6d6fefdc9dfa2f5f5d4fadad89668edce59bcfd2de9eafa6e0dbc3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.479818674Z" level=info msg="ignoring event" container=2bcded85e6133f08c6b9d3edd7fc3ac1fa56bfe038e90b5d795fb8f2d03d9643 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.598403679Z" level=info msg="ignoring event" container=0096f55eda7a59b969520aad33f95ef2f4e88449065f34aea426d256f6710870 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.714223542Z" level=info msg="ignoring event" container=5968dceb92dd3a1a2df9ee2bea931e1c9226f400d793729ba9a9c9d772fa0041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.830938882Z" level=info msg="ignoring event" container=765e783db02ae8f5b3727b2fc3c7cff355b17e5aabf9c7df6665716eee5f28ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:29:36 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:29:36.946671748Z" level=info msg="ignoring event" container=b33e578e2672074fffe15df6768bea414c03a08387deb2dab2ad70f4c9fc29ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:02 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:02.723431289Z" level=info msg="ignoring event" container=15e4bda80e7be6b0247b0ac5f1b80113d5ac694fa7d91d40f17a7402bae7d6af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:02 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:02.894166771Z" level=info msg="ignoring event" container=4bf5fc3e4e55c47cda60d241917b28576a8a0491fb053877f25751c540d2d340 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:11 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:11.371464080Z" level=info msg="ignoring event" container=7b4349a9f03d98e6a09125c57223ee392fa9351166515598c5af3fdb7aa6a145 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:11 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:11.431280198Z" level=info msg="ignoring event" container=448d20dc3f6b953a8980171f0d104530d77b515432e9b279770ff4346946450c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:19 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:19.290000428Z" level=info msg="ignoring event" container=1023acaf289a43a485e447984175897c7495df26906277ddaaef1ec5a2d07535 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:30:44 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:30:44.285316359Z" level=info msg="ignoring event" container=40db2146ec0aeb8ff89fc7908270a6a2d2c6bc93c7849a5f48436f72c53850ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:31:25 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:31:25.293278927Z" level=info msg="ignoring event" container=bbf0ed65b38054208296636ac3b226a5e5a4c668ff339c1c749f20505c610b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:32:49 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:32:49.299095556Z" level=info msg="ignoring event" container=f26354f5f2612ee52a84e0395d51d1d40cb0535eb28d087d27513b6fdc0581b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:02 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:02.871133153Z" level=info msg="ignoring event" container=6c90c6d6cdede439bc0501d8c92a5ba905752207e4410c47fa473caef4fa72c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:03 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:03.273150761Z" level=info msg="ignoring event" container=d1c9b97a103d0e917b21d688e7d83865268c90c17ee5a6fe9926f50ab1f876c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:04 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:04.506428541Z" level=info msg="ignoring event" container=b2eaa55b97bb5abc3ba8fa608a93b20da8d974ba3df8a9bb015395e8f3f42faa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:05 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:05.039879620Z" level=info msg="ignoring event" container=338934ac4c29442fa6a13ee0b6e060c4d094dfe5c9a3060af0fc21719a7f2606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:13 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:13.802442667Z" level=info msg="ignoring event" container=5b086069fc969e328938c2914abf4f6c1aa4644eb5d9d3769f8d548a66bc1a71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:14 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:14.011546240Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	Mar 29 17:35:25 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:25.299344638Z" level=info msg="ignoring event" container=6fb97cf86817956f58e8cc81938439e21528b1ee8470e5a909217e423885f7f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:38 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:38.301550959Z" level=info msg="ignoring event" container=e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:55 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:35:55.299808853Z" level=info msg="ignoring event" container=341c5acd996e57bc6e8446f5d735f2f4d24e0e32d96e4e5558011bebd241eed6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:36:42 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:36:42.304517141Z" level=info msg="ignoring event" container=5ece21b5ad77067a75d46ed1a1629eff5688d1045ac6f49fb3b60c8b538b0cbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:38:09 functional-20220329171943-564087 dockerd[461]: time="2022-03-29T17:38:09.301689066Z" level=info msg="ignoring event" container=b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID
	b0ab6d4b86f6f       e1482a24335a6                                                                                         14 seconds ago      Exited              kubernetes-dashboard        5                   5a3c9f51bf72e
	e3c65b1070d0e       6e38f40d628db                                                                                         2 minutes ago       Exited              storage-provisioner         6                   cc5d0baf8413c
	b2eaa55b97bb5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 minutes ago       Exited              mount-munger                0                   338934ac4c294
	d9045ca0a0691       7801cfc6d5c07                                                                                         3 minutes ago       Running             dashboard-metrics-scraper   0                   662a4b01301cf
	3e75f1ede27a8       mysql@sha256:c8f68301981a7224cc9c063fc7a97b6ef13cfc4142b4871d1a35c95777ce96f4                         3 minutes ago       Running             mysql                       0                   b137d87cb6fa7
	d20cd4783e3a9       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         3 minutes ago       Running             echoserver                  0                   d25d7b875dbd7
	2dd0f2a22875f       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         3 minutes ago       Running             echoserver                  0                   e2603dfd92fce
	e9aa8d5e73b6f       nginx@sha256:db7973cb238c8e8acea5982c1048b5987e9e4da60d20daeef7301757de97357a                         3 minutes ago       Running             nginx                       0                   d84f2425e6a68
	ed2abfdf852e4       a4ca41631cc7a                                                                                         8 minutes ago       Running             coredns                     0                   0df1f00beb0f9
	f2754d5f9fe06       3c53fa8541f95                                                                                         8 minutes ago       Running             kube-proxy                  0                   3bf69ff5fd337
	0fdf829055d35       25f8c7f3da61c                                                                                         8 minutes ago       Running             etcd                        2                   b01c21fe12444
	c4e839ad1beff       884d49d6d8c9f                                                                                         8 minutes ago       Running             kube-scheduler              2                   9abfed6c48dd8
	7001dc05b7a31       b0c9e5e4dbb14                                                                                         8 minutes ago       Running             kube-controller-manager     2                   4fc84999cfae8
	c0967de79e035       3fc1d62d65872                                                                                         8 minutes ago       Running             kube-apiserver              2                   4c49627bee296
	
	* 
	* ==> coredns [ed2abfdf852e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220329171943-564087
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220329171943-564087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=functional-20220329171943-564087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T17_29_47_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:29:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220329171943-564087
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:38:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 17:35:24 +0000   Tue, 29 Mar 2022 17:29:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220329171943-564087
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                624efa2b-b5b3-4a5b-aa55-17f0f305f536
	  Boot ID:                    b9773761-6fd5-4dc5-89e9-c6bdd61e4f8f
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-5pkpn                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  default                     hello-node-connect-74cf8bc446-j6cdn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  default                     mysql-b87c45988-hmlz9                                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m35s
	  default                     nginx-svc                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-64897985d-775kd                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m23s
	  kube-system                 etcd-functional-20220329171943-564087                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m36s
	  kube-system                 kube-apiserver-functional-20220329171943-564087             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-controller-manager-functional-20220329171943-564087    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-fpn9r                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-functional-20220329171943-564087             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-58549894f-pch9z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kubernetes-dashboard        kubernetes-dashboard-ccd587f44-wwxh6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m22s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  8m44s (x6 over 8m44s)  kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s (x6 over 8m44s)  kubelet     Node functional-20220329171943-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m44s (x5 over 8m44s)  kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m44s                  kubelet     Starting kubelet.
	  Normal  Starting                 8m36s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m36s                  kubelet     Node functional-20220329171943-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m36s                  kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8m36s                  kubelet     Node functional-20220329171943-564087 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m36s                  kubelet     Node functional-20220329171943-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                8m26s                  kubelet     Node functional-20220329171943-564087 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007853] FS-Cache: O-key=[8] '64fc070000000000'
	[  +0.006301] FS-Cache: N-cookie c=000000009093c4f7 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009333] FS-Cache: N-cookie d=00000000c16fb940 n=00000000c141da8a
	[  +0.007863] FS-Cache: N-key=[8] '64fc070000000000'
	[  +0.008255] FS-Cache: Duplicate cookie detected
	[  +0.005487] FS-Cache: O-cookie c=00000000f44cc384 [p=00000000e9f3ac79 fl=226 nc=0 na=1]
	[  +0.009515] FS-Cache: O-cookie d=00000000c16fb940 n=00000000037b7c9a
	[  +0.007866] FS-Cache: O-key=[8] '64fc070000000000'
	[  +0.006276] FS-Cache: N-cookie c=0000000059c41939 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009336] FS-Cache: N-cookie d=00000000c16fb940 n=00000000123f1292
	[  +0.007866] FS-Cache: N-key=[8] '64fc070000000000'
	[  +1.066384] FS-Cache: Duplicate cookie detected
	[  +0.004669] FS-Cache: O-cookie c=0000000045133206 [p=00000000e9f3ac79 fl=226 nc=0 na=1]
	[  +0.008153] FS-Cache: O-cookie d=00000000c16fb940 n=00000000241714ab
	[  +0.006480] FS-Cache: O-key=[8] '63fc070000000000'
	[  +0.004970] FS-Cache: N-cookie c=000000009762aa29 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009375] FS-Cache: N-cookie d=00000000c16fb940 n=000000000fdd68b4
	[  +0.007861] FS-Cache: N-key=[8] '63fc070000000000'
	[  +0.342363] FS-Cache: Duplicate cookie detected
	[  +0.004674] FS-Cache: O-cookie c=00000000ad0570cd [p=00000000e9f3ac79 fl=226 nc=0 na=1]
	[  +0.008153] FS-Cache: O-cookie d=00000000c16fb940 n=00000000815febdc
	[  +0.006485] FS-Cache: O-key=[8] '66fc070000000000'
	[  +0.004965] FS-Cache: N-cookie c=0000000005b7ac19 [p=00000000e9f3ac79 fl=2 nc=0 na=1]
	[  +0.009350] FS-Cache: N-cookie d=00000000c16fb940 n=000000000cec3a16
	[  +0.007854] FS-Cache: N-key=[8] '66fc070000000000'
	
	* 
	* ==> etcd [0fdf829055d3] <==
	* {"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-29T17:29:40.863Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:29:41.054Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.054Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220329171943-564087 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:29:41.055Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:29:41.056Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-29T17:29:41.056Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-03-29T17:34:45.593Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.846819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2022-03-29T17:34:45.593Z","caller":"traceutil/trace.go:171","msg":"trace[871438359] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:677; }","duration":"116.993968ms","start":"2022-03-29T17:34:45.476Z","end":"2022-03-29T17:34:45.593Z","steps":["trace[871438359] 'range keys from in-memory index tree'  (duration: 116.745099ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  17:38:23 up  2:20,  0 users,  load average: 0.21, 0.36, 0.58
	Linux functional-20220329171943-564087 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c0967de79e03] <==
	* I0329 17:29:44.964300       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0329 17:29:44.964326       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0329 17:29:44.969091       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0329 17:29:44.971906       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0329 17:29:44.971925       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0329 17:29:45.303287       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0329 17:29:45.331174       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0329 17:29:45.465127       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0329 17:29:45.469953       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0329 17:29:45.471109       1 controller.go:611] quota admission added evaluator for: endpoints
	I0329 17:29:45.476533       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0329 17:29:46.095632       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0329 17:29:46.794369       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0329 17:29:46.849160       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0329 17:29:46.861047       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0329 17:29:47.049189       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0329 17:29:59.844053       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0329 17:29:59.992833       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0329 17:30:01.180267       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0329 17:34:30.278695       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.99.176.178]
	I0329 17:34:32.174384       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.106.38.92]
	I0329 17:34:41.664305       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.102.234.129]
	I0329 17:34:48.728466       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.98.32.203]
	I0329 17:34:59.658919       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.186.203]
	I0329 17:34:59.671984       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.55.2]
	
	* 
	* ==> kube-controller-manager [7001dc05b7a3] <==
	* I0329 17:34:59.456616       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0329 17:34:59.458195       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0329 17:34:59.460206       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0329 17:34:59.460269       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0329 17:34:59.462783       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0329 17:34:59.462786       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0329 17:34:59.466970       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0329 17:34:59.467026       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0329 17:34:59.469479       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0329 17:34:59.469489       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0329 17:34:59.550270       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-58549894f-pch9z"
	I0329 17:34:59.550354       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-wwxh6"
	I0329 17:35:14.344936       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:35:29.345469       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:35:44.345758       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:35:59.346691       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:14.346846       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:29.347463       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:44.348449       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:36:59.349319       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:14.350239       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:29.351188       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:44.352062       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:37:59.352785       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:38:14.353141       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [f2754d5f9fe0] <==
	* I0329 17:30:01.151791       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:30:01.151858       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:30:01.151899       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:30:01.176234       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:30:01.176272       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:30:01.176281       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:30:01.176301       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:30:01.176620       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:30:01.177748       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:30:01.177812       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:30:01.178368       1 config.go:317] "Starting service config controller"
	I0329 17:30:01.178387       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:30:01.278510       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0329 17:30:01.278510       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [c4e839ad1bef] <==
	* W0329 17:29:44.066919       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:44.066975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0329 17:29:44.066981       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0329 17:29:44.067000       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0329 17:29:44.067099       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:29:44.067128       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:29:44.067265       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0329 17:29:44.067289       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0329 17:29:44.067355       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 17:29:44.067376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 17:29:44.067074       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:44.067604       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0329 17:29:44.067608       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0329 17:29:44.067731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0329 17:29:44.067538       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:29:44.067860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:29:44.951148       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:44.951198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:29:44.953002       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:29:44.953027       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:29:45.079869       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:29:45.079907       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:29:45.144762       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 17:29:45.144792       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0329 17:29:45.460902       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:19:53 UTC, end at Tue 2022-03-29 17:38:23 UTC. --
	Mar 29 17:37:22 functional-20220329171943-564087 kubelet[10814]: E0329 17:37:22.158537   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:37:29 functional-20220329171943-564087 kubelet[10814]: I0329 17:37:29.158094   10814 scope.go:110] "RemoveContainer" containerID="5ece21b5ad77067a75d46ed1a1629eff5688d1045ac6f49fb3b60c8b538b0cbf"
	Mar 29 17:37:29 functional-20220329171943-564087 kubelet[10814]: E0329 17:37:29.158347   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:37:33 functional-20220329171943-564087 kubelet[10814]: I0329 17:37:33.158120   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:37:33 functional-20220329171943-564087 kubelet[10814]: E0329 17:37:33.158425   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:37:40 functional-20220329171943-564087 kubelet[10814]: I0329 17:37:40.157736   10814 scope.go:110] "RemoveContainer" containerID="5ece21b5ad77067a75d46ed1a1629eff5688d1045ac6f49fb3b60c8b538b0cbf"
	Mar 29 17:37:40 functional-20220329171943-564087 kubelet[10814]: E0329 17:37:40.158020   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:37:47 functional-20220329171943-564087 kubelet[10814]: I0329 17:37:47.157494   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:37:47 functional-20220329171943-564087 kubelet[10814]: E0329 17:37:47.157697   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:37:55 functional-20220329171943-564087 kubelet[10814]: I0329 17:37:55.157350   10814 scope.go:110] "RemoveContainer" containerID="5ece21b5ad77067a75d46ed1a1629eff5688d1045ac6f49fb3b60c8b538b0cbf"
	Mar 29 17:37:55 functional-20220329171943-564087 kubelet[10814]: E0329 17:37:55.157630   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:37:59 functional-20220329171943-564087 kubelet[10814]: I0329 17:37:59.157620   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:37:59 functional-20220329171943-564087 kubelet[10814]: E0329 17:37:59.157849   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:38:09 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:09.157735   10814 scope.go:110] "RemoveContainer" containerID="5ece21b5ad77067a75d46ed1a1629eff5688d1045ac6f49fb3b60c8b538b0cbf"
	Mar 29 17:38:09 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:09.336746   10814 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6 through plugin: invalid network status for"
	Mar 29 17:38:09 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:09.341230   10814 scope.go:110] "RemoveContainer" containerID="5ece21b5ad77067a75d46ed1a1629eff5688d1045ac6f49fb3b60c8b538b0cbf"
	Mar 29 17:38:09 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:09.341537   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:38:09 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:09.341899   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:38:10 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:10.348790   10814 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6 through plugin: invalid network status for"
	Mar 29 17:38:10 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:10.351740   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:38:10 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:10.352010   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	Mar 29 17:38:13 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:13.158207   10814 scope.go:110] "RemoveContainer" containerID="e3c65b1070d0ed5c42917057f6281d7b0ba3710fa98da4af1b2971d96041af66"
	Mar 29 17:38:13 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:13.158421   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(77aadc78-8bda-4817-9c83-e76ccd0bd850)\"" pod="kube-system/storage-provisioner" podUID=77aadc78-8bda-4817-9c83-e76ccd0bd850
	Mar 29 17:38:22 functional-20220329171943-564087 kubelet[10814]: I0329 17:38:22.158111   10814 scope.go:110] "RemoveContainer" containerID="b0ab6d4b86f6ff9bd6d9626badcee2d4350f65d44023eb1d45ce8ddf6c5b0e84"
	Mar 29 17:38:22 functional-20220329171943-564087 kubelet[10814]: E0329 17:38:22.158400   10814 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccd587f44-wwxh6_kubernetes-dashboard(6a0fb501-3bcb-471e-b8d5-8223739a3172)\"" pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-wwxh6" podUID=6a0fb501-3bcb-471e-b8d5-8223739a3172
	
	* 
	* ==> kubernetes-dashboard [b0ab6d4b86f6] <==
	* 2022/03/29 17:38:09 Using namespace: kubernetes-dashboard
	2022/03/29 17:38:09 Using in-cluster config to connect to apiserver
	2022/03/29 17:38:09 Using secret token for csrf signing
	2022/03/29 17:38:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/03/29 17:38:09 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00041f6a0)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x413
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00021ef00)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0xc6
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00021ef00)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x47
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x21c
	
	* 
	* ==> storage-provisioner [e3c65b1070d0] <==
	* I0329 17:35:38.285243       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0329 17:35:38.286496       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20220329171943-564087 -n functional-20220329171943-564087
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20220329171943-564087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox-mount
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20220329171943-564087 describe pod busybox-mount
helpers_test.go:281: (dbg) kubectl --context functional-20220329171943-564087 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220329171943-564087/192.168.49.2
	Start Time:   Tue, 29 Mar 2022 17:34:53 +0000
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.7
	IPs:
	  IP:  172.17.0.7
	Containers:
	  mount-munger:
	    Container ID:  docker://b2eaa55b97bb5abc3ba8fa608a93b20da8d974ba3df8a9bb015395e8f3f42faa
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 29 Mar 2022 17:35:04 +0000
	      Finished:     Tue, 29 Mar 2022 17:35:04 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbtnr (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pbtnr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m31s  default-scheduler  Successfully assigned default/busybox-mount to functional-20220329171943-564087
	  Normal  Pulling    3m30s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m20s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 9.661234083s
	  Normal  Created    3m20s  kubelet            Created container mount-munger
	  Normal  Started    3m20s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:284: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
E0329 17:39:18.011547  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (232.38s)
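Both crash loops in the logs above share one symptom: the kubernetes-dashboard panic and the storage-provisioner fatal each fail dialing 10.96.0.1:443 (the in-cluster apiserver service) with `connection refused`. When triaging a dump like this, grouping error lines by dial target makes the shared root cause obvious. The sketch below is an illustrative triage helper, not part of the test suite; it assumes the Go net-package error format `dial tcp <addr>: connect: <reason>` seen in these logs.

```python
import re
from collections import Counter

# Matches the Go net package's error format, e.g.
# `dial tcp 10.96.0.1:443: connect: connection refused`
DIAL_RE = re.compile(r"dial tcp (\d+\.\d+\.\d+\.\d+:\d+): connect: (\w[\w ]*\w)")

def group_dial_errors(lines):
    """Count (target address, failure reason) pairs across a batch of log lines."""
    counts = Counter()
    for line in lines:
        m = DIAL_RE.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

# Two representative lines from the dashboard and storage-provisioner output above.
logs = [
    'panic: Get "https://10.96.0.1:443/api/v1/...": dial tcp 10.96.0.1:443: connect: connection refused',
    'F0329 17:35:38.286496       1 main.go:39] error getting server version: dial tcp 10.96.0.1:443: connect: connection refused',
]
print(group_dial_errors(logs))
```

Both lines collapse onto the single key `("10.96.0.1:443", "connection refused")`, confirming the two crash loops point at the same unreachable apiserver rather than two independent bugs.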

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (367.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- rollout status deployment/busybox: (2.871657298s)
multinode_test.go:497: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- nslookup kubernetes.io
E0329 17:46:58.121103  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:47:13.930114  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:47:39.082262  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
multinode_test.go:517: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- nslookup kubernetes.io: exit status 1 (1m0.237757228s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:519: Pod busybox-7978565885-bgzlj could not resolve 'kubernetes.io': exit status 1
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- nslookup kubernetes.io: exit status 1 (1m0.244689038s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:519: Pod busybox-7978565885-cbpdd could not resolve 'kubernetes.io': exit status 1
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- nslookup kubernetes.default
E0329 17:49:01.003196  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:49:18.010832  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:49:30.085231  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
multinode_test.go:527: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- nslookup kubernetes.default: exit status 1 (1m0.235724792s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:529: Pod busybox-7978565885-bgzlj could not resolve 'kubernetes.default': exit status 1
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- nslookup kubernetes.default
E0329 17:49:57.770935  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
multinode_test.go:527: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- nslookup kubernetes.default: exit status 1 (1m0.230344013s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:529: Pod busybox-7978565885-cbpdd could not resolve 'kubernetes.default': exit status 1
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- nslookup kubernetes.default.svc.cluster.local
E0329 17:51:17.161554  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:51:44.844453  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
multinode_test.go:535: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1m0.230782602s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:537: Pod busybox-7978565885-bgzlj could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- nslookup kubernetes.default.svc.cluster.local
E0329 17:52:21.059237  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
multinode_test.go:535: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1m0.341291928s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:537: Pod busybox-7978565885-cbpdd could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
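All six lookups above fail identically: BusyBox reaches the cluster DNS at 10.96.0.10 but resolves nothing, and each `kubectl exec` runs just over a minute before exiting. A quick cross-check when reading a report like this is to parse the Go-style durations embedded in the `Non-zero exit ... (1m0.2...s)` lines and confirm every attempt hit roughly the same timeout. An illustrative helper, not part of the suite, assuming only the minutes-and-seconds duration form that appears here:

```python
import re

# Go durations in these logs look like `(1m0.237757228s)`; minutes are optional.
DUR_RE = re.compile(r"\((?:(\d+)m)?([\d.]+)s\)")

def duration_seconds(line):
    """Extract the parenthesized Go duration from a test-log line, in seconds."""
    m = DUR_RE.search(line)
    if not m:
        return None
    minutes = int(m.group(1) or 0)
    return minutes * 60 + float(m.group(2))

print(duration_seconds("exit status 1 (1m0.237757228s)"))
```

Applied to the six `Non-zero exit` lines above, every value lands between 60.23s and 60.35s, consistent with a fixed ~60s exec timeout rather than variable DNS latency.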
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20220329174520-564087
helpers_test.go:236: (dbg) docker inspect multinode-20220329174520-564087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e",
	        "Created": "2022-03-29T17:45:29.644975292Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 653073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:45:29.996104263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/hosts",
	        "LogPath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e-json.log",
	        "Name": "/multinode-20220329174520-564087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220329174520-564087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220329174520-564087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4-init/diff:/var/lib/docker/overlay2/9db4e23be625e034f4ded606113a10eac42e47ab03824d2ab674189ac3bfe07b/diff:/var/lib/docker/overlay2/23cb119bfb0f25fd9defc73c170f1edc0bcfc13d6d5cd5613108d72d2020b31c/diff:/var/lib/docker/overlay2/bc76d55655624ec99d26daa97a683f1a970449af5a278430e255d62e3f8b7357/diff:/var/lib/docker/overlay2/ec38188e1f99f15e49cbf2bb0c04cafd5ff241ea7966de30f2b4201c74cb77cb/diff:/var/lib/docker/overlay2/a5d5403dacc48240e9b97d1b8e55974405d1cf196bfcfa0ca32548f269cc1071/diff:/var/lib/docker/overlay2/9b4ccea6c0eb5887c76137ed35db5e0e51cf583e7c5034dcee8dd746f9a5c3bb/diff:/var/lib/docker/overlay2/8938344848e3a72fe363a3ed45041a50457e8ce2a391113dd515f7afd6d909db/diff:/var/lib/docker/overlay2/b6696995e5a26e0378be0861a49fb24498de5c915b3c02bd34ae778e05b48a9d/diff:/var/lib/docker/overlay2/f95310f65d1c113884a9ac4dc0f127daf9d1b3f623762106478e4fe41692cc2d/diff:/var/lib/docker/overlay2/30ef7d70756fc9f43cfd45ede0c78a5dbd376911f1844027d7dd8448f0d1bd2c/diff:/var/lib/docker/overlay2/aeeca576548699f29ecc5f8389942ed3bfde02e1b481e0e8365142a90064496c/diff:/var/lib/docker/overlay2/5ba2587df64129d8cf8c96c14448186757d9b360c9e3101c4a20b1edd728ce18/diff:/var/lib/docker/overlay2/64d1213878e17d1927644c40bb0d52e6a3a124b5e86daa58f166ee0704d9da9b/diff:/var/lib/docker/overlay2/7ac9b531b4439100cfb4789e5009915d72b467705e391e0d197a760783cb4e4b/diff:/var/lib/docker/overlay2/f6f1442868cd491bc73dc995e7c0b552c0d2843d43327267ee3d015edc11da4e/diff:/var/lib/docker/overlay2/c7c6c9113fac60b95369a3e535649a67c14c4c74da4c7de68bd1aaf14bce0ac3/diff:/var/lib/docker/overlay2/9eba2b84f547941ca647ea1c9eff5275fae385f1b800741ed421672c6437487a/diff:/var/lib/docker/overlay2/8bb3fb7770413b61ccdf84f4a5cccb728206fcecd1f006ca906874d3c5d4481c/diff:/var/lib/docker/overlay2/7ebf161ae3775c9e0f6ebe9e26d40e46766d5f3387c2ea279679d585cbd19866/diff:/var/lib/docker/overlay2/4d1064116e64fbf54de0c8ef70255b6fc77b005725e02a52281bfa0e5de5a7af/diff:/var/lib/docker/overlay2/f82ba82619b078a905b7e5a1466fc8ca89d8664fa04dc61cf5914aa0c34ae177/diff:/var/lib/docker/overlay2/728d17980e4c7c100416d2fd1be83673103f271144543fb61798e4a0303c1d63/diff:/var/lib/docker/overlay2/d7e175c39be427bc2372876df06eb27ba2b10462c347d1ee8e43a957642f2ca5/diff:/var/lib/docker/overlay2/1e872f98bd0c0432c85e2812af12d33dcacc384f762347889c846540583137be/diff:/var/lib/docker/overlay2/f5da27e443a249317e2670de2816cbae827a62edb0e4475ac004418a25e279d8/diff:/var/lib/docker/overlay2/33e17a308b62964f37647c1f62c13733476a7eaadb28f29ad1d1f21b5d0456ee/diff:/var/lib/docker/overlay2/6b6bb10e19be67a77e94bd177e583241953840e08b30d68eca16b63e2c5fd574/diff:/var/lib/docker/overlay2/8e061338d4e4cf068f61861fc08144097ee117189101f3a71f361481dc288fd3/diff:/var/lib/docker/overlay2/27d99a6f864614a9dad7efdece7ace23256ff5489d66daed625285168e2fcc48/diff:/var/lib/docker/overlay2/8642d51376c5c35316cb2d9d5832c7382cb5e0d9df1b766f5187ab10eaafb4d6/diff:/var/lib/docker/overlay2/9ffbd3f47292209200a9ab357ba5f68beb15c82f2511804d74dcf2ad3b44155f/diff:/var/lib/docker/overlay2/d2512b29dd494ed5dc05b52800efe6a97b07803c1d3172d6a9d9b0b45a7e19eb/diff:/var/lib/docker/overlay2/7e87858609885bf7a576966de8888d2db30e18d8b582b6f6434176c59d71cca5/diff:/var/lib/docker/overlay2/54e00a6514941a66517f8aa879166fd5e8660f7ab673e554aa927bfcb19a145d/diff:/var/lib/docker/overlay2/02ced31172683ffa2fe2365aa827ef66d364bd100865b9095680e2c79f2e868e/diff:/var/lib/docker/overlay2/e65eba629c5d8828d9a2c4b08b322edb4b07793e8bfb091b93fd15013209a387/diff:/var/lib/docker/overlay2/3ee0fd224e7a66a3d8cc598c64cdaf0436eab7f466aa34e3406a0058e16a7f30/diff:/var/lib/docker/overlay2/29b13dceeebd7568b56f69e176c7d37f5b88fe4c13065f01a6f3a36606d5b62c/diff:/var/lib/docker/overlay2/b10262d215789890fd0056a6e4ff379df5e663524b5b96d9671e10c54adc5a25/diff:/var/lib/docker/overlay2/a292b90c390a4decbdd1887aa58471b2827752df1ef18358a1fb82fd665de0b4/diff:/var/lib/docker/overlay2/fbac86c28573a8fd7399f9fd0a51ebb8eef8158b8264c242aa16e16f6227522f/diff:/var/lib/docker/overlay2/b0ddb339636d56ff9132bc75064a21216c2e71f3b3b53d4a39f9fe66133219c2/diff:/var/lib/docker/overlay2/9e52af85e3d331425d5757a9bde2ace3e5e12622a0d748e6559c2a74907adaa1/diff:/var/lib/docker/overlay2/e856b1e5a3fe78b31306313bdf9bc42d7b1f45dc864587f3ce5dfd3793cb96d3/diff:/var/lib/docker/overlay2/1fbed3ccb397ff1873888dc253845b880a4d30dda3b181220402f7592d8a3ad7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20220329174520-564087",
	                "Source": "/var/lib/docker/volumes/multinode-20220329174520-564087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220329174520-564087",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220329174520-564087",
	                "name.minikube.sigs.k8s.io": "multinode-20220329174520-564087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfa3887bc3466cbc8d8b255e9ed809e9bd584fbd5da7465eb8488801cc438e51",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49514"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49512"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49511"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfa3887bc346",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220329174520-564087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09d1f85080aa",
	                        "multinode-20220329174520-564087"
	                    ],
	                    "NetworkID": "aff26c540dc64674861fa27e2ecf8bdb09cef8a75e776a2ae6774799c98e2445",
	                    "EndpointID": "1d6a16f20312cc311d8f81df64f834ccf84315a0d66f08036bd5e970d53767f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20220329174520-564087 -n multinode-20220329174520-564087
helpers_test.go:245: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220329174520-564087 logs -n 25: (1.123822676s)
helpers_test.go:253: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | json-output-error-20220329174300-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:00 UTC | Tue, 29 Mar 2022 17:43:00 UTC |
	|         | json-output-error-20220329174300-564087           |                                         |         |         |                               |                               |
	| start   | -p                                                | docker-network-20220329174300-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:00 UTC | Tue, 29 Mar 2022 17:43:28 UTC |
	|         | docker-network-20220329174300-564087              |                                         |         |         |                               |                               |
	|         | --network=                                        |                                         |         |         |                               |                               |
	| delete  | -p                                                | docker-network-20220329174300-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:28 UTC | Tue, 29 Mar 2022 17:43:30 UTC |
	|         | docker-network-20220329174300-564087              |                                         |         |         |                               |                               |
	| start   | -p                                                | docker-network-20220329174330-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:30 UTC | Tue, 29 Mar 2022 17:43:56 UTC |
	|         | docker-network-20220329174330-564087              |                                         |         |         |                               |                               |
	|         | --network=bridge                                  |                                         |         |         |                               |                               |
	| delete  | -p                                                | docker-network-20220329174330-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:56 UTC | Tue, 29 Mar 2022 17:43:58 UTC |
	|         | docker-network-20220329174330-564087              |                                         |         |         |                               |                               |
	| start   | -p                                                | existing-network-20220329174358-564087  | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:59 UTC | Tue, 29 Mar 2022 17:44:24 UTC |
	|         | existing-network-20220329174358-564087            |                                         |         |         |                               |                               |
	|         | --network=existing-network                        |                                         |         |         |                               |                               |
	| delete  | -p                                                | existing-network-20220329174358-564087  | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:24 UTC | Tue, 29 Mar 2022 17:44:27 UTC |
	|         | existing-network-20220329174358-564087            |                                         |         |         |                               |                               |
	| start   | -p                                                | custom-subnet-20220329174427-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:27 UTC | Tue, 29 Mar 2022 17:44:53 UTC |
	|         | custom-subnet-20220329174427-564087               |                                         |         |         |                               |                               |
	|         | --subnet=192.168.60.0/24                          |                                         |         |         |                               |                               |
	| delete  | -p                                                | custom-subnet-20220329174427-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:53 UTC | Tue, 29 Mar 2022 17:44:55 UTC |
	|         | custom-subnet-20220329174427-564087               |                                         |         |         |                               |                               |
	| start   | -p                                                | mount-start-1-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:55 UTC | Tue, 29 Mar 2022 17:45:00 UTC |
	|         | mount-start-1-20220329174455-564087               |                                         |         |         |                               |                               |
	|         | --memory=2048 --mount                             |                                         |         |         |                               |                               |
	|         | --mount-gid 0 --mount-msize 6543                  |                                         |         |         |                               |                               |
	|         | --mount-port 46464 --mount-uid 0                  |                                         |         |         |                               |                               |
	|         | --no-kubernetes --driver=docker                   |                                         |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                         |         |         |                               |                               |
	| -p      | mount-start-1-20220329174455-564087               | mount-start-1-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:01 UTC | Tue, 29 Mar 2022 17:45:01 UTC |
	|         | ssh -- ls /minikube-host                          |                                         |         |         |                               |                               |
	| start   | -p                                                | mount-start-2-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:01 UTC | Tue, 29 Mar 2022 17:45:06 UTC |
	|         | mount-start-2-20220329174455-564087               |                                         |         |         |                               |                               |
	|         | --memory=2048 --mount                             |                                         |         |         |                               |                               |
	|         | --mount-gid 0 --mount-msize 6543                  |                                         |         |         |                               |                               |
	|         | --mount-port 46465 --mount-uid 0                  |                                         |         |         |                               |                               |
	|         | --no-kubernetes --driver=docker                   |                                         |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                         |         |         |                               |                               |
	| -p      | mount-start-2-20220329174455-564087               | mount-start-2-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:07 UTC | Tue, 29 Mar 2022 17:45:07 UTC |
	|         | ssh -- ls /minikube-host                          |                                         |         |         |                               |                               |
	| delete  | -p                                                | mount-start-1-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:07 UTC | Tue, 29 Mar 2022 17:45:09 UTC |
	|         | mount-start-1-20220329174455-564087               |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=5                            |                                         |         |         |                               |                               |
	| -p      | mount-start-2-20220329174455-564087               | mount-start-2-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:09 UTC | Tue, 29 Mar 2022 17:45:09 UTC |
	|         | ssh -- ls /minikube-host                          |                                         |         |         |                               |                               |
	| stop    | -p                                                | mount-start-2-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:09 UTC | Tue, 29 Mar 2022 17:45:11 UTC |
	|         | mount-start-2-20220329174455-564087               |                                         |         |         |                               |                               |
	| start   | -p                                                | mount-start-2-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:11 UTC | Tue, 29 Mar 2022 17:45:17 UTC |
	|         | mount-start-2-20220329174455-564087               |                                         |         |         |                               |                               |
	| -p      | mount-start-2-20220329174455-564087               | mount-start-2-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:18 UTC | Tue, 29 Mar 2022 17:45:18 UTC |
	|         | ssh -- ls /minikube-host                          |                                         |         |         |                               |                               |
	| delete  | -p                                                | mount-start-2-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:18 UTC | Tue, 29 Mar 2022 17:45:20 UTC |
	|         | mount-start-2-20220329174455-564087               |                                         |         |         |                               |                               |
	| delete  | -p                                                | mount-start-1-20220329174455-564087     | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:20 UTC | Tue, 29 Mar 2022 17:45:20 UTC |
	|         | mount-start-1-20220329174455-564087               |                                         |         |         |                               |                               |
	| start   | -p                                                | multinode-20220329174520-564087         | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:20 UTC | Tue, 29 Mar 2022 17:46:45 UTC |
	|         | multinode-20220329174520-564087                   |                                         |         |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                         |         |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                         |         |         |                               |                               |
	| kubectl | -p multinode-20220329174520-564087 -- apply -f    | multinode-20220329174520-564087         | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:46 UTC | Tue, 29 Mar 2022 17:46:46 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                         |         |         |                               |                               |
	| kubectl | -p                                                | multinode-20220329174520-564087         | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:46 UTC | Tue, 29 Mar 2022 17:46:49 UTC |
	|         | multinode-20220329174520-564087                   |                                         |         |         |                               |                               |
	|         | -- rollout status                                 |                                         |         |         |                               |                               |
	|         | deployment/busybox                                |                                         |         |         |                               |                               |
	| kubectl | -p multinode-20220329174520-564087                | multinode-20220329174520-564087         | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:49 UTC | Tue, 29 Mar 2022 17:46:49 UTC |
	|         | -- get pods -o                                    |                                         |         |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                         |         |         |                               |                               |
	| kubectl | -p multinode-20220329174520-564087                | multinode-20220329174520-564087         | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:49 UTC | Tue, 29 Mar 2022 17:46:49 UTC |
	|         | -- get pods -o                                    |                                         |         |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |         |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:45:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:45:20.331936  652427 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:45:20.332074  652427 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:45:20.332084  652427 out.go:310] Setting ErrFile to fd 2...
	I0329 17:45:20.332089  652427 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:45:20.332226  652427 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:45:20.332561  652427 out.go:304] Setting JSON to false
	I0329 17:45:20.333822  652427 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8874,"bootTime":1648567047,"procs":516,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:45:20.333890  652427 start.go:124] virtualization: kvm guest
	I0329 17:45:20.336395  652427 out.go:176] * [multinode-20220329174520-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 17:45:20.337789  652427 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:45:20.336537  652427 notify.go:193] Checking for updates...
	I0329 17:45:20.339121  652427 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:45:20.340388  652427 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:20.341724  652427 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 17:45:20.342960  652427 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 17:45:20.343220  652427 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:45:20.381804  652427 docker.go:137] docker version: linux-20.10.14
	I0329 17:45:20.381904  652427 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:45:20.469400  652427 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:45:20.4097232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:45:20.469499  652427 docker.go:254] overlay module found
	I0329 17:45:20.471464  652427 out.go:176] * Using the docker driver based on user configuration
	I0329 17:45:20.471504  652427 start.go:283] selected driver: docker
	I0329 17:45:20.471513  652427 start.go:800] validating driver "docker" against <nil>
	I0329 17:45:20.471540  652427 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0329 17:45:20.471590  652427 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0329 17:45:20.471616  652427 out.go:241] ! Your cgroup does not allow setting memory.
	I0329 17:45:20.472872  652427 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0329 17:45:20.473545  652427 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:45:20.558853  652427 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:45:20.501447069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:45:20.558990  652427 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 17:45:20.559158  652427 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 17:45:20.559179  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:45:20.559184  652427 cni.go:154] 0 nodes found, recommending kindnet
	I0329 17:45:20.559195  652427 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 17:45:20.559210  652427 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 17:45:20.559220  652427 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0329 17:45:20.559234  652427 start_flags.go:306] config:
	{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:45:20.561307  652427 out.go:176] * Starting control plane node multinode-20220329174520-564087 in cluster multinode-20220329174520-564087
	I0329 17:45:20.561350  652427 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:45:20.562512  652427 out.go:176] * Pulling base image ...
	I0329 17:45:20.562536  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:45:20.562572  652427 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:45:20.562568  652427 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 17:45:20.562684  652427 cache.go:57] Caching tarball of preloaded images
	I0329 17:45:20.562956  652427 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 17:45:20.562980  652427 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 17:45:20.563381  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:45:20.563424  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json: {Name:mk2811d5590202c4e7e5921a7acc1152f1603641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:20.604209  652427 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 17:45:20.604239  652427 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 17:45:20.604251  652427 cache.go:208] Successfully downloaded all kic artifacts
	I0329 17:45:20.604324  652427 start.go:348] acquiring machines lock for multinode-20220329174520-564087: {Name:mk4375468e93ff31e49b583a42e4274bca560bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 17:45:20.604454  652427 start.go:352] acquired machines lock for "multinode-20220329174520-564087" in 108.527µs
	I0329 17:45:20.604484  652427 start.go:90] Provisioning new machine with config: &{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 17:45:20.604559  652427 start.go:127] createHost starting for "" (driver="docker")
	I0329 17:45:20.606589  652427 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0329 17:45:20.606818  652427 start.go:161] libmachine.API.Create for "multinode-20220329174520-564087" (driver="docker")
	I0329 17:45:20.606851  652427 client.go:168] LocalClient.Create starting
	I0329 17:45:20.606933  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 17:45:20.606962  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:45:20.606982  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:45:20.607038  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 17:45:20.607054  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:45:20.607063  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:45:20.607383  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 17:45:20.639077  652427 cli_runner.go:180] docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 17:45:20.639148  652427 network_create.go:262] running [docker network inspect multinode-20220329174520-564087] to gather additional debugging logs...
	I0329 17:45:20.639168  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087
	W0329 17:45:20.668363  652427 cli_runner.go:180] docker network inspect multinode-20220329174520-564087 returned with exit code 1
	I0329 17:45:20.668421  652427 network_create.go:265] error running [docker network inspect multinode-20220329174520-564087]: docker network inspect multinode-20220329174520-564087: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220329174520-564087
	I0329 17:45:20.668446  652427 network_create.go:267] output of [docker network inspect multinode-20220329174520-564087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220329174520-564087
	
	** /stderr **
	I0329 17:45:20.668496  652427 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:45:20.698410  652427 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003066b8] misses:0}
	I0329 17:45:20.698471  652427 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 17:45:20.698488  652427 network_create.go:114] attempt to create docker network multinode-20220329174520-564087 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 17:45:20.698528  652427 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220329174520-564087
	I0329 17:45:20.760118  652427 network_create.go:98] docker network multinode-20220329174520-564087 192.168.49.0/24 created
	I0329 17:45:20.760156  652427 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20220329174520-564087" container
	I0329 17:45:20.760221  652427 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 17:45:20.790463  652427 cli_runner.go:133] Run: docker volume create multinode-20220329174520-564087 --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087 --label created_by.minikube.sigs.k8s.io=true
	I0329 17:45:20.821303  652427 oci.go:102] Successfully created a docker volume multinode-20220329174520-564087
	I0329 17:45:20.821379  652427 cli_runner.go:133] Run: docker run --rm --name multinode-20220329174520-564087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087 --entrypoint /usr/bin/test -v multinode-20220329174520-564087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 17:45:21.366294  652427 oci.go:106] Successfully prepared a docker volume multinode-20220329174520-564087
	I0329 17:45:21.366346  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:45:21.366368  652427 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 17:45:21.366440  652427 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 17:45:29.526467  652427 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.159935577s)
	I0329 17:45:29.526505  652427 kic.go:188] duration metric: took 8.160134 seconds to extract preloaded images to volume
	W0329 17:45:29.526547  652427 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0329 17:45:29.526557  652427 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0329 17:45:29.526615  652427 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 17:45:29.614686  652427 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220329174520-564087 --name multinode-20220329174520-564087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220329174520-564087 --network multinode-20220329174520-564087 --ip 192.168.49.2 --volume multinode-20220329174520-564087:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 17:45:30.004820  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Running}}
	I0329 17:45:30.038532  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:30.070711  652427 cli_runner.go:133] Run: docker exec multinode-20220329174520-564087 stat /var/lib/dpkg/alternatives/iptables
	I0329 17:45:30.131557  652427 oci.go:278] the created container "multinode-20220329174520-564087" has a running status.
	I0329 17:45:30.131603  652427 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa...
	I0329 17:45:30.200220  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0329 17:45:30.200276  652427 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 17:45:30.286384  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:30.324206  652427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 17:45:30.324234  652427 kic_runner.go:114] Args: [docker exec --privileged multinode-20220329174520-564087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 17:45:30.413834  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:30.447846  652427 machine.go:88] provisioning docker machine ...
	I0329 17:45:30.447885  652427 ubuntu.go:169] provisioning hostname "multinode-20220329174520-564087"
	I0329 17:45:30.447937  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:30.480443  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:30.480716  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:30.480748  652427 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20220329174520-564087 && echo "multinode-20220329174520-564087" | sudo tee /etc/hostname
	I0329 17:45:30.605009  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20220329174520-564087
	
	I0329 17:45:30.605124  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:30.636838  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:30.636980  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:30.637014  652427 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220329174520-564087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220329174520-564087/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220329174520-564087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 17:45:30.752777  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 17:45:30.752810  652427 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem
ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 17:45:30.752840  652427 ubuntu.go:177] setting up certificates
	I0329 17:45:30.752852  652427 provision.go:83] configureAuth start
	I0329 17:45:30.752909  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087
	I0329 17:45:30.783678  652427 provision.go:138] copyHostCerts
	I0329 17:45:30.783733  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:45:30.783769  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 17:45:30.783787  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:45:30.783857  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 17:45:30.783947  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:45:30.783981  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 17:45:30.783992  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:45:30.784030  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 17:45:30.784102  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:45:30.784128  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 17:45:30.784138  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:45:30.784171  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 17:45:30.784232  652427 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.multinode-20220329174520-564087 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220329174520-564087]
	I0329 17:45:30.865210  652427 provision.go:172] copyRemoteCerts
	I0329 17:45:30.865293  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 17:45:30.865340  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:30.896127  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:30.980224  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0329 17:45:30.980325  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0329 17:45:30.997145  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0329 17:45:30.997224  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 17:45:31.014432  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0329 17:45:31.014495  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 17:45:31.031419  652427 provision.go:86] duration metric: configureAuth took 278.554389ms
	I0329 17:45:31.031445  652427 ubuntu.go:193] setting minikube options for container-runtime
	I0329 17:45:31.031603  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:45:31.031650  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:31.062634  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:31.062805  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:31.062825  652427 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 17:45:31.185413  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 17:45:31.185441  652427 ubuntu.go:71] root file system type: overlay
	I0329 17:45:31.185604  652427 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 17:45:31.185665  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:31.217219  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:31.217380  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:31.217441  652427 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 17:45:31.341318  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 17:45:31.341404  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:31.372795  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:31.372939  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:31.372957  652427 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 17:45:31.996301  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 17:45:31.334016942 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 17:45:31.996333  652427 machine.go:91] provisioned docker machine in 1.548462042s
	I0329 17:45:31.996344  652427 client.go:171] LocalClient.Create took 11.389481842s
	I0329 17:45:31.996356  652427 start.go:169] duration metric: libmachine.API.Create for "multinode-20220329174520-564087" took 11.389538465s
	I0329 17:45:31.996367  652427 start.go:302] post-start starting for "multinode-20220329174520-564087" (driver="docker")
	I0329 17:45:31.996373  652427 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 17:45:31.996438  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 17:45:31.996488  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.029326  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.116473  652427 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 17:45:32.119015  652427 command_runner.go:130] > NAME="Ubuntu"
	I0329 17:45:32.119035  652427 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0329 17:45:32.119039  652427 command_runner.go:130] > ID=ubuntu
	I0329 17:45:32.119044  652427 command_runner.go:130] > ID_LIKE=debian
	I0329 17:45:32.119048  652427 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0329 17:45:32.119054  652427 command_runner.go:130] > VERSION_ID="20.04"
	I0329 17:45:32.119062  652427 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0329 17:45:32.119069  652427 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0329 17:45:32.119076  652427 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0329 17:45:32.119090  652427 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0329 17:45:32.119101  652427 command_runner.go:130] > VERSION_CODENAME=focal
	I0329 17:45:32.119106  652427 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0329 17:45:32.119200  652427 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 17:45:32.119221  652427 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 17:45:32.119229  652427 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 17:45:32.119235  652427 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 17:45:32.119247  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 17:45:32.119303  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 17:45:32.119365  652427 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> 5640872.pem in /etc/ssl/certs
	I0329 17:45:32.119377  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /etc/ssl/certs/5640872.pem
	I0329 17:45:32.119457  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 17:45:32.125790  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:45:32.142458  652427 start.go:305] post-start completed in 146.075505ms
	I0329 17:45:32.142815  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087
	I0329 17:45:32.173629  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:45:32.173868  652427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 17:45:32.173906  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.204947  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.289421  652427 command_runner.go:130] > 17%
	I0329 17:45:32.289487  652427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 17:45:32.292925  652427 command_runner.go:130] > 242G
	I0329 17:45:32.293120  652427 start.go:130] duration metric: createHost completed in 11.688551212s
	I0329 17:45:32.293142  652427 start.go:81] releasing machines lock for "multinode-20220329174520-564087", held for 11.688671722s
	I0329 17:45:32.293224  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087
	I0329 17:45:32.323412  652427 ssh_runner.go:195] Run: systemctl --version
	I0329 17:45:32.323463  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.323519  652427 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 17:45:32.323576  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.354831  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.355356  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.579669  652427 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.15)
	I0329 17:45:32.579705  652427 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0329 17:45:32.579775  652427 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0329 17:45:32.579792  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 17:45:32.579798  652427 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0329 17:45:32.579807  652427 command_runner.go:130] > <H1>302 Moved</H1>
	I0329 17:45:32.579814  652427 command_runner.go:130] > The document has moved
	I0329 17:45:32.579825  652427 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0329 17:45:32.579832  652427 command_runner.go:130] > </BODY></HTML>
	I0329 17:45:32.588958  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:45:32.596881  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:45:32.596934  652427 command_runner.go:130] > [Unit]
	I0329 17:45:32.596945  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:45:32.596954  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:45:32.596961  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:45:32.596978  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:45:32.596994  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:45:32.597011  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:45:32.597014  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:45:32.597018  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:45:32.597024  652427 command_runner.go:130] > [Service]
	I0329 17:45:32.597028  652427 command_runner.go:130] > Type=notify
	I0329 17:45:32.597033  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:45:32.597043  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:45:32.597069  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:45:32.597085  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:45:32.597098  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:45:32.597111  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:45:32.597123  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:45:32.597138  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:45:32.597153  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:45:32.597166  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:45:32.597175  652427 command_runner.go:130] > ExecStart=
	I0329 17:45:32.597198  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:45:32.597209  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:45:32.597215  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:45:32.597227  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:45:32.597235  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:45:32.597238  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:45:32.597245  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:45:32.597250  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:45:32.597254  652427 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0329 17:45:32.597261  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:45:32.597264  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:45:32.597275  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:45:32.597283  652427 command_runner.go:130] > Delegate=yes
	I0329 17:45:32.597288  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:45:32.597295  652427 command_runner.go:130] > KillMode=process
	I0329 17:45:32.597298  652427 command_runner.go:130] > [Install]
	I0329 17:45:32.597306  652427 command_runner.go:130] > WantedBy=multi-user.target
	I0329 17:45:32.597716  652427 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 17:45:32.597774  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 17:45:32.606512  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 17:45:32.618200  652427 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0329 17:45:32.618223  652427 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
	I0329 17:45:32.618272  652427 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 17:45:32.692673  652427 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 17:45:32.768369  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:45:32.776590  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:45:32.776718  652427 command_runner.go:130] > [Unit]
	I0329 17:45:32.776734  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:45:32.776743  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:45:32.776757  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:45:32.776770  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:45:32.776781  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:45:32.776790  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:45:32.776801  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:45:32.776811  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:45:32.776822  652427 command_runner.go:130] > [Service]
	I0329 17:45:32.776832  652427 command_runner.go:130] > Type=notify
	I0329 17:45:32.776838  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:45:32.776851  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:45:32.776866  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:45:32.776881  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:45:32.776895  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:45:32.776910  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:45:32.776924  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:45:32.776939  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:45:32.776954  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:45:32.776968  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:45:32.776983  652427 command_runner.go:130] > ExecStart=
	I0329 17:45:32.777006  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:45:32.777019  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:45:32.777034  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:45:32.777048  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:45:32.777068  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:45:32.777077  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:45:32.777088  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:45:32.777098  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:45:32.777110  652427 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0329 17:45:32.777128  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:45:32.777138  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:45:32.777149  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:45:32.777159  652427 command_runner.go:130] > Delegate=yes
	I0329 17:45:32.777172  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:45:32.777182  652427 command_runner.go:130] > KillMode=process
	I0329 17:45:32.777189  652427 command_runner.go:130] > [Install]
	I0329 17:45:32.777204  652427 command_runner.go:130] > WantedBy=multi-user.target
	I0329 17:45:32.777443  652427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 17:45:32.856398  652427 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 17:45:32.865426  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:45:32.901376  652427 command_runner.go:130] > 20.10.13
	I0329 17:45:32.903182  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:45:32.940289  652427 command_runner.go:130] > 20.10.13
	I0329 17:45:32.943790  652427 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 17:45:32.943866  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:45:32.973629  652427 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0329 17:45:32.976814  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:45:32.987651  652427 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0329 17:45:32.987730  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:45:32.987792  652427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 17:45:33.017331  652427 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.5
	I0329 17:45:33.017355  652427 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.5
	I0329 17:45:33.017360  652427 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.5
	I0329 17:45:33.017366  652427 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.5
	I0329 17:45:33.017370  652427 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0329 17:45:33.017374  652427 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0329 17:45:33.017378  652427 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0329 17:45:33.017382  652427 command_runner.go:130] > kubernetesui/dashboard:v2.3.1
	I0329 17:45:33.017387  652427 command_runner.go:130] > kubernetesui/metrics-scraper:v1.0.7
	I0329 17:45:33.017391  652427 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 17:45:33.019113  652427 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 17:45:33.019130  652427 docker.go:537] Images already preloaded, skipping extraction
	I0329 17:45:33.019178  652427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 17:45:33.048467  652427 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.5
	I0329 17:45:33.048496  652427 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.5
	I0329 17:45:33.048505  652427 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.5
	I0329 17:45:33.048512  652427 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.5
	I0329 17:45:33.048516  652427 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0329 17:45:33.048520  652427 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0329 17:45:33.048524  652427 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0329 17:45:33.048530  652427 command_runner.go:130] > kubernetesui/dashboard:v2.3.1
	I0329 17:45:33.048537  652427 command_runner.go:130] > kubernetesui/metrics-scraper:v1.0.7
	I0329 17:45:33.048544  652427 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 17:45:33.050283  652427 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 17:45:33.050327  652427 cache_images.go:84] Images are preloaded, skipping loading
	I0329 17:45:33.050376  652427 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 17:45:33.129448  652427 command_runner.go:130] > cgroupfs
	I0329 17:45:33.131302  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:45:33.131320  652427 cni.go:154] 1 nodes found, recommending kindnet
	I0329 17:45:33.131338  652427 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 17:45:33.131358  652427 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220329174520-564087 NodeName:multinode-20220329174520-564087 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/
var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 17:45:33.131518  652427 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220329174520-564087"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 17:45:33.131621  652427 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220329174520-564087 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0329 17:45:33.131687  652427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 17:45:33.138123  652427 command_runner.go:130] > kubeadm
	I0329 17:45:33.138147  652427 command_runner.go:130] > kubectl
	I0329 17:45:33.138153  652427 command_runner.go:130] > kubelet
	I0329 17:45:33.138737  652427 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 17:45:33.138796  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 17:45:33.145401  652427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (409 bytes)
	I0329 17:45:33.157419  652427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 17:45:33.169290  652427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0329 17:45:33.181295  652427 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 17:45:33.184089  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:45:33.193022  652427 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087 for IP: 192.168.49.2
	I0329 17:45:33.193182  652427 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 17:45:33.193220  652427 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 17:45:33.193271  652427 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key
	I0329 17:45:33.193286  652427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt with IP's: []
	I0329 17:45:33.382270  652427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt ...
	I0329 17:45:33.382311  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt: {Name:mkf2670e92ffcd5bb222a702ee708a8cd949c85e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.382527  652427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key ...
	I0329 17:45:33.382541  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key: {Name:mk2469da8bb447b021f089c5151287d57e91b757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.382625  652427 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2
	I0329 17:45:33.382641  652427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 17:45:33.597217  652427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2 ...
	I0329 17:45:33.597260  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2: {Name:mke3ca14fd6b4be816e504978b954612dd79105f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.597450  652427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2 ...
	I0329 17:45:33.597464  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2: {Name:mk0af0fd690c60c1f78cbf327f4dc9aa4e203738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.597541  652427 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt
	I0329 17:45:33.597602  652427 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key
	I0329 17:45:33.597645  652427 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key
	I0329 17:45:33.597658  652427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt with IP's: []
	I0329 17:45:33.735508  652427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt ...
	I0329 17:45:33.735548  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt: {Name:mka12619613fc7fd7a45c7c925d374dc9a1bcead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.735740  652427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key ...
	I0329 17:45:33.735754  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key: {Name:mkf885717686e79c361d511a026f2b3d78adc44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.735831  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0329 17:45:33.735850  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0329 17:45:33.735859  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0329 17:45:33.735874  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0329 17:45:33.735886  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0329 17:45:33.735903  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0329 17:45:33.735914  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0329 17:45:33.735923  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0329 17:45:33.735972  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem (1338 bytes)
	W0329 17:45:33.736011  652427 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087_empty.pem, impossibly tiny 0 bytes
	I0329 17:45:33.736023  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1679 bytes)
	I0329 17:45:33.736047  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 17:45:33.736071  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 17:45:33.736092  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 17:45:33.736133  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:45:33.736161  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.736178  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.736190  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem -> /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.736710  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 17:45:33.754152  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 17:45:33.770372  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 17:45:33.786810  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0329 17:45:33.803136  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 17:45:33.819669  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0329 17:45:33.835849  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 17:45:33.852099  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0329 17:45:33.868248  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /usr/share/ca-certificates/5640872.pem (1708 bytes)
	I0329 17:45:33.884501  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 17:45:33.900602  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem --> /usr/share/ca-certificates/564087.pem (1338 bytes)
	I0329 17:45:33.916860  652427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 17:45:33.928452  652427 ssh_runner.go:195] Run: openssl version
	I0329 17:45:33.933068  652427 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0329 17:45:33.933136  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/564087.pem && ln -fs /usr/share/ca-certificates/564087.pem /etc/ssl/certs/564087.pem"
	I0329 17:45:33.939892  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.942755  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.942817  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.942852  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.948331  652427 command_runner.go:130] > 51391683
	I0329 17:45:33.948531  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/564087.pem /etc/ssl/certs/51391683.0"
	I0329 17:45:33.955332  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5640872.pem && ln -fs /usr/share/ca-certificates/5640872.pem /etc/ssl/certs/5640872.pem"
	I0329 17:45:33.962163  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.964845  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.964965  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.965019  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.969377  652427 command_runner.go:130] > 3ec20f2e
	I0329 17:45:33.969573  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5640872.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 17:45:33.976389  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 17:45:33.983150  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.985837  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.985981  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.986026  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.990398  652427 command_runner.go:130] > b5213941
	I0329 17:45:33.990595  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 17:45:33.997297  652427 kubeadm.go:391] StartCluster: {Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:45:33.997425  652427 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 17:45:34.027789  652427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 17:45:34.034875  652427 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0329 17:45:34.034901  652427 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0329 17:45:34.034907  652427 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0329 17:45:34.034969  652427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 17:45:34.041760  652427 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 17:45:34.041807  652427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 17:45:34.047603  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0329 17:45:34.047633  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0329 17:45:34.047644  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0329 17:45:34.047657  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 17:45:34.048182  652427 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 17:45:34.048224  652427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 17:45:34.264667  652427 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1021-gcp\n", err: exit status 1
	I0329 17:45:34.327290  652427 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0329 17:45:44.452040  652427 command_runner.go:130] > [init] Using Kubernetes version: v1.23.5
	I0329 17:45:44.452123  652427 command_runner.go:130] > [preflight] Running pre-flight checks
	I0329 17:45:44.452245  652427 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0329 17:45:44.452323  652427 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1021-gcp
	I0329 17:45:44.452385  652427 command_runner.go:130] > DOCKER_VERSION: 20.10.13
	I0329 17:45:44.452507  652427 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0329 17:45:44.452586  652427 command_runner.go:130] > OS: Linux
	I0329 17:45:44.452711  652427 command_runner.go:130] > CGROUPS_CPU: enabled
	I0329 17:45:44.452819  652427 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0329 17:45:44.452909  652427 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0329 17:45:44.453032  652427 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0329 17:45:44.453160  652427 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0329 17:45:44.453259  652427 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0329 17:45:44.453363  652427 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0329 17:45:44.453465  652427 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0329 17:45:44.453557  652427 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0329 17:45:44.453708  652427 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0329 17:45:44.453831  652427 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0329 17:45:44.455744  652427 out.go:203]   - Generating certificates and keys ...
	I0329 17:45:44.454116  652427 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0329 17:45:44.455845  652427 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0329 17:45:44.455942  652427 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0329 17:45:44.456068  652427 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0329 17:45:44.456141  652427 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0329 17:45:44.456222  652427 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0329 17:45:44.456286  652427 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0329 17:45:44.456353  652427 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0329 17:45:44.456509  652427 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20220329174520-564087] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0329 17:45:44.456576  652427 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0329 17:45:44.456763  652427 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20220329174520-564087] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0329 17:45:44.456843  652427 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0329 17:45:44.456916  652427 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0329 17:45:44.456979  652427 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0329 17:45:44.457079  652427 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0329 17:45:44.457154  652427 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0329 17:45:44.457231  652427 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0329 17:45:44.457306  652427 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0329 17:45:44.457377  652427 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0329 17:45:44.457541  652427 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0329 17:45:44.457643  652427 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0329 17:45:44.457692  652427 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0329 17:45:44.459377  652427 out.go:203]   - Booting up control plane ...
	I0329 17:45:44.457842  652427 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0329 17:45:44.459490  652427 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0329 17:45:44.459598  652427 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0329 17:45:44.459705  652427 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0329 17:45:44.459818  652427 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0329 17:45:44.460032  652427 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0329 17:45:44.460147  652427 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.002212 seconds
	I0329 17:45:44.460329  652427 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0329 17:45:44.460534  652427 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
	I0329 17:45:44.460903  652427 command_runner.go:130] > NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
	I0329 17:45:44.461013  652427 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0329 17:45:44.461374  652427 command_runner.go:130] > [mark-control-plane] Marking the node multinode-20220329174520-564087 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0329 17:45:44.462844  652427 out.go:203]   - Configuring RBAC rules ...
	I0329 17:45:44.461542  652427 command_runner.go:130] > [bootstrap-token] Using token: zawg7g.qjqz7bfgihqm7mmd
	I0329 17:45:44.463002  652427 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0329 17:45:44.463126  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0329 17:45:44.463313  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0329 17:45:44.463461  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0329 17:45:44.463594  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0329 17:45:44.463701  652427 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0329 17:45:44.463824  652427 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0329 17:45:44.463880  652427 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0329 17:45:44.463949  652427 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0329 17:45:44.464031  652427 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0329 17:45:44.464129  652427 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0329 17:45:44.464167  652427 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0329 17:45:44.464243  652427 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0329 17:45:44.464306  652427 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0329 17:45:44.464378  652427 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0329 17:45:44.464417  652427 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0329 17:45:44.464461  652427 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0329 17:45:44.464552  652427 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0329 17:45:44.464634  652427 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0329 17:45:44.464729  652427 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0329 17:45:44.464794  652427 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0329 17:45:44.464873  652427 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token zawg7g.qjqz7bfgihqm7mmd \
	I0329 17:45:44.465009  652427 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 \
	I0329 17:45:44.465034  652427 command_runner.go:130] > 	--control-plane 
	I0329 17:45:44.465155  652427 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0329 17:45:44.465267  652427 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zawg7g.qjqz7bfgihqm7mmd \
	I0329 17:45:44.465435  652427 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 
	I0329 17:45:44.465474  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:45:44.465488  652427 cni.go:154] 1 nodes found, recommending kindnet
	I0329 17:45:44.467033  652427 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0329 17:45:44.467102  652427 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0329 17:45:44.471007  652427 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0329 17:45:44.471036  652427 command_runner.go:130] >   Size: 2675000   	Blocks: 5232       IO Block: 4096   regular file
	I0329 17:45:44.471048  652427 command_runner.go:130] > Device: 34h/52d	Inode: 8004372     Links: 1
	I0329 17:45:44.471061  652427 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0329 17:45:44.471073  652427 command_runner.go:130] > Access: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:45:44.471087  652427 command_runner.go:130] > Modify: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:45:44.471099  652427 command_runner.go:130] > Change: 2022-03-21 20:07:13.664642338 +0000
	I0329 17:45:44.471110  652427 command_runner.go:130] >  Birth: -
	I0329 17:45:44.471197  652427 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 17:45:44.471215  652427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0329 17:45:44.546893  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 17:45:45.559214  652427 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0329 17:45:45.562845  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0329 17:45:45.567610  652427 command_runner.go:130] > serviceaccount/kindnet created
	I0329 17:45:45.574049  652427 command_runner.go:130] > daemonset.apps/kindnet created
	I0329 17:45:45.578443  652427 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.031449182s)
	I0329 17:45:45.578512  652427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 17:45:45.578604  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:45.578610  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=multinode-20220329174520-564087 minikube.k8s.io/updated_at=2022_03_29T17_45_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:45.585792  652427 command_runner.go:130] > -16
	I0329 17:45:45.669737  652427 command_runner.go:130] > node/multinode-20220329174520-564087 labeled
	I0329 17:45:45.672228  652427 ops.go:34] apiserver oom_adj: -16
	I0329 17:45:45.672289  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0329 17:45:45.672350  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:45.722907  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:46.223213  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:46.275757  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:46.723249  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:46.777030  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:47.223533  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:47.273570  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:47.723481  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:47.776650  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:48.223209  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:48.274842  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:48.723451  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:48.775854  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:49.223378  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:49.275283  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:49.723945  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:49.776134  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:50.223785  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:50.276830  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:50.723653  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:50.775722  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:51.223217  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:51.272699  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:51.723118  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:51.775142  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:52.223700  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:52.273692  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:52.723656  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:52.775225  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:53.223881  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:53.275870  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:53.723437  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:53.775237  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:54.223887  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:54.275636  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:54.723242  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:54.772350  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:55.223322  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:55.275242  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:55.724148  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:55.775597  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:56.223200  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:56.272110  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:56.723138  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:56.772079  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:57.224104  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:57.273136  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:57.724114  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:57.851740  652427 command_runner.go:130] > NAME      SECRETS   AGE
	I0329 17:45:57.851761  652427 command_runner.go:130] > default   1         0s
	I0329 17:45:57.854086  652427 kubeadm.go:1020] duration metric: took 12.275551592s to wait for elevateKubeSystemPrivileges.
	I0329 17:45:57.854117  652427 kubeadm.go:393] StartCluster complete in 23.856825989s
	I0329 17:45:57.854139  652427 settings.go:142] acquiring lock: {Name:mkf193dd78851319876bf7c47a47f525125a4fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:57.854233  652427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:57.854893  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mke8ff89e3fadc84c0cca24c5855d2fcb9124f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:57.855378  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:57.855634  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:45:57.856055  652427 cert_rotation.go:137] Starting client certificate rotation controller
	I0329 17:45:57.856285  652427 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0329 17:45:57.856300  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:57.856309  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:57.863324  652427 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0329 17:45:57.863340  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:57.863346  652427 round_trippers.go:580]     Audit-Id: 4a78bb5e-c8d7-4219-9488-6844c3c299cc
	I0329 17:45:57.863350  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:57.863355  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:57.863359  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:57.863363  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:57.863367  652427 round_trippers.go:580]     Content-Length: 291
	I0329 17:45:57.863371  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:57 GMT
	I0329 17:45:57.863393  652427 request.go:1181] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"429","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0329 17:45:57.863740  652427 request.go:1181] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"429","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0329 17:45:57.863788  652427 round_trippers.go:463] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0329 17:45:57.863796  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:57.863802  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:57.863807  652427 round_trippers.go:473]     Content-Type: application/json
	I0329 17:45:57.867109  652427 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0329 17:45:57.867130  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:57.867139  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:57.867146  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:57.867153  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:57.867160  652427 round_trippers.go:580]     Content-Length: 291
	I0329 17:45:57.867167  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:57 GMT
	I0329 17:45:57.867177  652427 round_trippers.go:580]     Audit-Id: 33c36a87-4234-4541-970d-61535f949807
	I0329 17:45:57.867184  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:57.867213  652427 request.go:1181] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"440","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0329 17:45:58.368059  652427 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0329 17:45:58.368082  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.368090  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.370444  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.370466  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.370472  652427 round_trippers.go:580]     Content-Length: 291
	I0329 17:45:58.370477  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.370490  652427 round_trippers.go:580]     Audit-Id: 8c50bb59-405b-42cb-b9fb-76b703236f22
	I0329 17:45:58.370495  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.370499  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.370505  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.370509  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.370533  652427 request.go:1181] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"450","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0329 17:45:58.370638  652427 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220329174520-564087" rescaled to 1
	I0329 17:45:58.370686  652427 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 17:45:58.372157  652427 out.go:176] * Verifying Kubernetes components...
	I0329 17:45:58.372204  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:45:58.370730  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 17:45:58.370757  652427 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 17:45:58.372298  652427 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220329174520-564087"
	I0329 17:45:58.370970  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:45:58.372319  652427 addons.go:65] Setting default-storageclass=true in profile "multinode-20220329174520-564087"
	I0329 17:45:58.372337  652427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220329174520-564087"
	I0329 17:45:58.372347  652427 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220329174520-564087"
	W0329 17:45:58.372371  652427 addons.go:165] addon storage-provisioner should already be in state true
	I0329 17:45:58.372417  652427 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:45:58.372732  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:58.372884  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:58.413855  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:58.414073  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:45:58.414377  652427 round_trippers.go:463] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0329 17:45:58.414388  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.414395  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.417608  652427 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 17:45:58.417345  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.417694  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.417709  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.417715  652427 round_trippers.go:580]     Content-Length: 109
	I0329 17:45:58.417720  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.417725  652427 round_trippers.go:580]     Audit-Id: b83edd2b-9136-46a1-ae5b-afc9a082cba3
	I0329 17:45:58.417732  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.417737  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.417746  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.417794  652427 request.go:1181] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"451"},"items":[]}
	I0329 17:45:58.417736  652427 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 17:45:58.417840  652427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 17:45:58.417896  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:58.418116  652427 addons.go:153] Setting addon default-storageclass=true in "multinode-20220329174520-564087"
	W0329 17:45:58.418133  652427 addons.go:165] addon default-storageclass should already be in state true
	I0329 17:45:58.418168  652427 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:45:58.418572  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:58.447231  652427 command_runner.go:130] > apiVersion: v1
	I0329 17:45:58.447256  652427 command_runner.go:130] > data:
	I0329 17:45:58.447263  652427 command_runner.go:130] >   Corefile: |
	I0329 17:45:58.447269  652427 command_runner.go:130] >     .:53 {
	I0329 17:45:58.447274  652427 command_runner.go:130] >         errors
	I0329 17:45:58.447281  652427 command_runner.go:130] >         health {
	I0329 17:45:58.447287  652427 command_runner.go:130] >            lameduck 5s
	I0329 17:45:58.447292  652427 command_runner.go:130] >         }
	I0329 17:45:58.447296  652427 command_runner.go:130] >         ready
	I0329 17:45:58.447305  652427 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0329 17:45:58.447311  652427 command_runner.go:130] >            pods insecure
	I0329 17:45:58.447318  652427 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0329 17:45:58.447327  652427 command_runner.go:130] >            ttl 30
	I0329 17:45:58.447332  652427 command_runner.go:130] >         }
	I0329 17:45:58.447338  652427 command_runner.go:130] >         prometheus :9153
	I0329 17:45:58.447346  652427 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0329 17:45:58.447353  652427 command_runner.go:130] >            max_concurrent 1000
	I0329 17:45:58.447358  652427 command_runner.go:130] >         }
	I0329 17:45:58.447364  652427 command_runner.go:130] >         cache 30
	I0329 17:45:58.447370  652427 command_runner.go:130] >         loop
	I0329 17:45:58.447375  652427 command_runner.go:130] >         reload
	I0329 17:45:58.447381  652427 command_runner.go:130] >         loadbalance
	I0329 17:45:58.447386  652427 command_runner.go:130] >     }
	I0329 17:45:58.447392  652427 command_runner.go:130] > kind: ConfigMap
	I0329 17:45:58.447397  652427 command_runner.go:130] > metadata:
	I0329 17:45:58.447408  652427 command_runner.go:130] >   creationTimestamp: "2022-03-29T17:45:44Z"
	I0329 17:45:58.447411  652427 command_runner.go:130] >   name: coredns
	I0329 17:45:58.447416  652427 command_runner.go:130] >   namespace: kube-system
	I0329 17:45:58.447428  652427 command_runner.go:130] >   resourceVersion: "269"
	I0329 17:45:58.447436  652427 command_runner.go:130] >   uid: 2dd730ee-e7f4-4bce-ba89-ef3784dbc9a2
	I0329 17:45:58.447602  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 17:45:58.447609  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:58.447893  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:45:58.448306  652427 node_ready.go:35] waiting up to 6m0s for node "multinode-20220329174520-564087" to be "Ready" ...
	I0329 17:45:58.448384  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:58.448392  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.448402  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.451169  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.451191  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.451198  652427 round_trippers.go:580]     Audit-Id: 33314548-468a-4e46-a188-62a2115de1b9
	I0329 17:45:58.451206  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.451213  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.451219  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.451226  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.451251  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.451376  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:45:58.458529  652427 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 17:45:58.458553  652427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 17:45:58.458612  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:58.461740  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:58.502451  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:58.665357  652427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 17:45:58.758795  652427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 17:45:58.767286  652427 command_runner.go:130] > configmap/coredns replaced
	I0329 17:45:58.772545  652427 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0329 17:45:58.952777  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:58.952804  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.952816  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.955597  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.955627  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.955636  652427 round_trippers.go:580]     Audit-Id: a6b5d5bb-636c-4573-a5b9-2e4ba88af60e
	I0329 17:45:58.955644  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.955651  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.955658  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.955665  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.955672  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.955856  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:45:59.084986  652427 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0329 17:45:59.085018  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0329 17:45:59.085027  652427 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0329 17:45:59.085037  652427 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0329 17:45:59.085044  652427 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0329 17:45:59.085050  652427 command_runner.go:130] > pod/storage-provisioner created
	I0329 17:45:59.085199  652427 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0329 17:45:59.090601  652427 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 17:45:59.090625  652427 addons.go:417] enableAddons completed in 719.880447ms
	I0329 17:45:59.452927  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:59.452952  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:59.452959  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:59.455471  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:59.455497  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:59.455504  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:59.455509  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:59.455513  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:59 GMT
	I0329 17:45:59.455518  652427 round_trippers.go:580]     Audit-Id: ce060bab-7da1-4d39-ba04-82b886cafee7
	I0329 17:45:59.455522  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:59.455527  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:59.455632  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:45:59.952138  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:59.952166  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:59.952173  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:59.954614  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:59.954634  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:59.954640  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:59.954645  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:59.954649  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:59 GMT
	I0329 17:45:59.954654  652427 round_trippers.go:580]     Audit-Id: 81888850-00d1-46f1-b210-087522f93e40
	I0329 17:45:59.954658  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:59.954663  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:59.954790  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:00.452787  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:00.452812  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:00.452820  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:00.455144  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:00.455165  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:00.455171  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:00.455175  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:00 GMT
	I0329 17:46:00.455179  652427 round_trippers.go:580]     Audit-Id: d95452df-5ec6-4caa-94f6-5af73479c326
	I0329 17:46:00.455183  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:00.455188  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:00.455200  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:00.455307  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:00.455624  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:00.952881  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:00.952914  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:00.952924  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:00.955403  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:00.955428  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:00.955438  652427 round_trippers.go:580]     Audit-Id: 4c9d15b9-7b0d-4a33-ac21-04c9710aa4df
	I0329 17:46:00.955444  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:00.955451  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:00.955459  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:00.955467  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:00.955477  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:00 GMT
	I0329 17:46:00.955584  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:01.453213  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:01.453240  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:01.453249  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:01.455468  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:01.455495  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:01.455504  652427 round_trippers.go:580]     Audit-Id: 82ab7820-eeb8-48bc-8425-1445ed6b5712
	I0329 17:46:01.455510  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:01.455517  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:01.455524  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:01.455532  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:01.455549  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:01 GMT
	I0329 17:46:01.455671  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:01.952213  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:01.952237  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:01.952244  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:01.954935  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:01.954963  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:01.954972  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:01.954980  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:01.954986  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:01.954993  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:01 GMT
	I0329 17:46:01.954999  652427 round_trippers.go:580]     Audit-Id: 2e10425c-be6f-415f-9f21-fe2a7ca71ff8
	I0329 17:46:01.955010  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:01.955131  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:02.452430  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:02.452460  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:02.452471  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:02.455326  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:02.455355  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:02.455365  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:02.455372  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:02.455380  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:02.455386  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:02.455393  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:02 GMT
	I0329 17:46:02.455400  652427 round_trippers.go:580]     Audit-Id: c1155a0c-671b-4e47-8ab8-7c7e4ccf418f
	I0329 17:46:02.455515  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:02.455838  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:02.953137  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:02.953164  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:02.953175  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:02.956173  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:02.956200  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:02.956208  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:02.956215  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:02.956221  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:02.956229  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:02 GMT
	I0329 17:46:02.956236  652427 round_trippers.go:580]     Audit-Id: 7b5d9e95-b9d3-40fa-ab1e-0a2ec2f61e50
	I0329 17:46:02.956248  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:02.956348  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:03.452986  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:03.453019  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:03.453030  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:03.455902  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:03.455934  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:03.455944  652427 round_trippers.go:580]     Audit-Id: 748c3a93-49d5-4013-a11d-5c4bf933916e
	I0329 17:46:03.455950  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:03.455956  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:03.455963  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:03.455970  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:03.455977  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:03 GMT
	I0329 17:46:03.456159  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:03.952792  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:03.952821  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:03.952832  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:03.955474  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:03.955499  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:03.955508  652427 round_trippers.go:580]     Audit-Id: fc837eb8-3bbe-450f-ad18-43a3ff20b991
	I0329 17:46:03.955515  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:03.955522  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:03.955529  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:03.955536  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:03.955542  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:03 GMT
	I0329 17:46:03.955704  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:04.452242  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:04.452269  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:04.452276  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:04.454763  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:04.454786  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:04.454793  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:04.454805  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:04.454812  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:04 GMT
	I0329 17:46:04.454821  652427 round_trippers.go:580]     Audit-Id: a2e1a4e9-c534-4fb7-a236-7d5b85716e5c
	I0329 17:46:04.454827  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:04.454834  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:04.454937  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:04.952472  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:04.952494  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:04.952502  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:04.954824  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:04.954852  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:04.954861  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:04.954867  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:04.954871  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:04.954876  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:04.954880  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:04 GMT
	I0329 17:46:04.954884  652427 round_trippers.go:580]     Audit-Id: 0aae4334-7a50-4066-9046-f853c02d5529
	I0329 17:46:04.954978  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:04.955309  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:05.452671  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:05.452694  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:05.452702  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:05.455118  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:05.455149  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:05.455159  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:05.455167  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:05.455174  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:05.455180  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:05.455187  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:05 GMT
	I0329 17:46:05.455195  652427 round_trippers.go:580]     Audit-Id: 67d61b81-ef74-4e42-b552-65ab957ea414
	I0329 17:46:05.455378  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:05.952973  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:05.953003  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:05.953014  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:05.955542  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:05.955581  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:05.955591  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:05.955598  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:05.955605  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:05.955613  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:05 GMT
	I0329 17:46:05.955620  652427 round_trippers.go:580]     Audit-Id: 04cfcdeb-84b8-4468-93bb-26cefb531c13
	I0329 17:46:05.955626  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:05.955726  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:06.452288  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:06.452313  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:06.452321  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:06.455010  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:06.455033  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:06.455041  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:06.455045  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:06.455050  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:06 GMT
	I0329 17:46:06.455056  652427 round_trippers.go:580]     Audit-Id: 9501d640-39b6-4f90-bd20-49156ec50fcc
	I0329 17:46:06.455062  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:06.455067  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:06.455198  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:06.952729  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:06.952759  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:06.952769  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:06.955312  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:06.955339  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:06.955348  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:06.955356  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:06 GMT
	I0329 17:46:06.955361  652427 round_trippers.go:580]     Audit-Id: 3d2244d6-6978-4659-9238-5f084a30b6cc
	I0329 17:46:06.955365  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:06.955370  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:06.955374  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:06.955466  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:06.955784  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:07.453069  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:07.453095  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:07.453104  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:07.455540  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:07.455564  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:07.455570  652427 round_trippers.go:580]     Audit-Id: 83b19bf4-0b6a-494a-8cf6-413f37d42a03
	I0329 17:46:07.455574  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:07.455579  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:07.455583  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:07.455587  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:07.455592  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:07 GMT
	I0329 17:46:07.455669  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:07.952425  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:07.952452  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:07.952464  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:07.955002  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:07.955033  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:07.955043  652427 round_trippers.go:580]     Audit-Id: f4a6adb1-e087-4261-8db6-c8584dc0aae9
	I0329 17:46:07.955051  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:07.955058  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:07.955071  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:07.955078  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:07.955084  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:07 GMT
	I0329 17:46:07.955193  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:08.452855  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:08.452884  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:08.452894  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:08.455304  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:08.455327  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:08.455333  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:08 GMT
	I0329 17:46:08.455338  652427 round_trippers.go:580]     Audit-Id: e0f83135-de42-40e5-9849-3a40561c2f49
	I0329 17:46:08.455343  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:08.455347  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:08.455352  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:08.455358  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:08.455460  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:08.953100  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:08.953123  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:08.953130  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:08.955628  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:08.955657  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:08.955667  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:08.955674  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:08.955681  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:08 GMT
	I0329 17:46:08.955688  652427 round_trippers.go:580]     Audit-Id: 4487def9-5d98-49a8-970e-af577ace467b
	I0329 17:46:08.955741  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:08.955761  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:08.955884  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:08.956297  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:09.452316  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:09.452339  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:09.452349  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:09.454784  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:09.454809  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:09.454818  652427 round_trippers.go:580]     Audit-Id: b6d3dd39-ed47-4871-ad4b-9d81c0f2fa2f
	I0329 17:46:09.454825  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:09.454832  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:09.454838  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:09.454846  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:09.454857  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:09 GMT
	I0329 17:46:09.455000  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:09.952623  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:09.952648  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:09.952655  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:09.955221  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:09.955244  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:09.955250  652427 round_trippers.go:580]     Audit-Id: 99f251b9-28d8-448f-b5b2-23d4f695c5b8
	I0329 17:46:09.955255  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:09.955259  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:09.955266  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:09.955274  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:09.955281  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:09 GMT
	I0329 17:46:09.955444  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:10.452175  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:10.452201  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:10.452208  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:10.454680  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:10.454704  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:10.454712  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:10.454720  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:10.454726  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:10.454733  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:10 GMT
	I0329 17:46:10.454739  652427 round_trippers.go:580]     Audit-Id: b0608705-4c9c-43af-a092-a70d79dcb548
	I0329 17:46:10.454745  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:10.454856  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:10.952220  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:10.952246  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:10.952253  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:10.954635  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:10.954659  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:10.954666  652427 round_trippers.go:580]     Audit-Id: 07ca974c-f0bd-4f52-b11f-6bb2b40add9a
	I0329 17:46:10.954671  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:10.954676  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:10.954680  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:10.954685  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:10.954695  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:10 GMT
	I0329 17:46:10.954831  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:11.452367  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:11.452392  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:11.452399  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:11.454620  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:11.454646  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:11.454655  652427 round_trippers.go:580]     Audit-Id: df0c9269-cb59-46cc-b8df-e33dc0c2b20a
	I0329 17:46:11.454662  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:11.454669  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:11.454676  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:11.454687  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:11.454695  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:11 GMT
	I0329 17:46:11.454825  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:11.455175  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:11.952355  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:11.952383  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:11.952391  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:11.954806  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:11.954829  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:11.954836  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:11.954841  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:11.954845  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:11.954850  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:11 GMT
	I0329 17:46:11.954854  652427 round_trippers.go:580]     Audit-Id: 5f6403f9-7b6f-473a-9ac8-3a57f700e0ec
	I0329 17:46:11.954859  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:11.954963  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:12.452474  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:12.452500  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:12.452507  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:12.455006  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:12.455034  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:12.455041  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:12.455045  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:12.455050  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:12.455054  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:12 GMT
	I0329 17:46:12.455059  652427 round_trippers.go:580]     Audit-Id: 02cd31b4-4450-4d0b-b716-edb25ec4b28e
	I0329 17:46:12.455063  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:12.455184  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:12.952721  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:12.952746  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:12.952760  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:12.955005  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:12.955031  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:12.955040  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:12.955046  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:12.955053  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:12.955061  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:12 GMT
	I0329 17:46:12.955071  652427 round_trippers.go:580]     Audit-Id: c2ca7a5a-f0f7-4e96-88c1-7f978288d21f
	I0329 17:46:12.955078  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:12.955167  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:13.452883  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:13.452912  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:13.452920  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:13.455238  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:13.455257  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:13.455264  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:13.455269  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:13.455275  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:13 GMT
	I0329 17:46:13.455288  652427 round_trippers.go:580]     Audit-Id: 2a8257d7-328e-44e6-8e1c-62bee7baee55
	I0329 17:46:13.455300  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:13.455311  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:13.455463  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:13.455809  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:13.952994  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:13.953022  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:13.953031  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:13.955520  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:13.955541  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:13.955547  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:13.955552  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:13.955556  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:13 GMT
	I0329 17:46:13.955560  652427 round_trippers.go:580]     Audit-Id: b9262a57-33b2-4301-80ff-1c60c4a98a8f
	I0329 17:46:13.955564  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:13.955569  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:13.955677  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:14.452222  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:14.452249  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:14.452260  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:14.454563  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:14.454584  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:14.454590  652427 round_trippers.go:580]     Audit-Id: e514765c-1c3e-49e7-822b-6e829d5dd74d
	I0329 17:46:14.454595  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:14.454599  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:14.454605  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:14.454612  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:14.454619  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:14 GMT
	I0329 17:46:14.454735  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:14.952276  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:14.952302  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:14.952310  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:14.954724  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:14.954753  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:14.954762  652427 round_trippers.go:580]     Audit-Id: 9ed1258a-72c2-49e2-a8e3-88d139b7fade
	I0329 17:46:14.954769  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:14.954777  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:14.954784  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:14.954792  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:14.954799  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:14 GMT
	I0329 17:46:14.954914  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:15.452614  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:15.452638  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.452646  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.454814  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:15.454839  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.454848  652427 round_trippers.go:580]     Audit-Id: 5a116d81-f7db-49bd-8e1a-1a2dfaefc3d6
	I0329 17:46:15.454856  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.454863  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.454870  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.454876  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.454881  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.455000  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:15.455397  652427 node_ready.go:49] node "multinode-20220329174520-564087" has status "Ready":"True"
	I0329 17:46:15.455423  652427 node_ready.go:38] duration metric: took 17.007097337s waiting for node "multinode-20220329174520-564087" to be "Ready" ...
	I0329 17:46:15.455435  652427 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:15.455554  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:15.455567  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.455583  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.458597  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:15.458625  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.458635  652427 round_trippers.go:580]     Audit-Id: f859ad80-6fa5-406f-a568-7c98a009b2aa
	I0329 17:46:15.458642  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.458651  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.458658  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.458666  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.458673  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.459036  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"487"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55643 chars]
	I0329 17:46:15.463172  652427 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:15.463253  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:15.463265  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.463275  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.465191  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:15.465210  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.465216  652427 round_trippers.go:580]     Audit-Id: ff77fb91-6575-476d-b708-86cdd6f9baed
	I0329 17:46:15.465221  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.465228  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.465235  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.465246  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.465258  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.465383  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:15.465769  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:15.465783  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.465790  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.467422  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:15.467440  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.467448  652427 round_trippers.go:580]     Audit-Id: e48cb016-ce23-4af0-99fc-3131ccc8894a
	I0329 17:46:15.467456  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.467463  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.467470  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.467479  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.467484  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.467607  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:15.968339  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:15.968372  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.968384  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.973042  652427 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0329 17:46:15.973089  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.973098  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.973106  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.973112  652427 round_trippers.go:580]     Audit-Id: c342f6e6-506a-4199-919a-49cdac91c576
	I0329 17:46:15.973118  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.973124  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.973132  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.973279  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:15.973753  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:15.973770  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.973777  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.975817  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:15.975839  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.975847  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.975854  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.975860  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.975867  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.975873  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.975880  652427 round_trippers.go:580]     Audit-Id: c757f294-067c-4ba6-8808-5df87aebe383
	I0329 17:46:15.976021  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:16.468632  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:16.468660  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.468673  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.471280  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:16.471310  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.471319  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.471327  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.471334  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.471341  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.471347  652427 round_trippers.go:580]     Audit-Id: 352eff47-644f-4cca-ba1a-20e8c50f3b7a
	I0329 17:46:16.471353  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.471492  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:16.472121  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:16.472142  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.472152  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.473948  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:16.473974  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.473983  652427 round_trippers.go:580]     Audit-Id: 12cc25f2-0339-4156-82ff-275c5129d565
	I0329 17:46:16.473990  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.473997  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.474004  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.474013  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.474023  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.474122  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:16.968749  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:16.968782  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.968792  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.971866  652427 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0329 17:46:16.971896  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.971908  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.971916  652427 round_trippers.go:580]     Audit-Id: f81cc77b-406c-429a-8221-e23406850077
	I0329 17:46:16.971929  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.971942  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.971954  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.971961  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.972148  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:16.972625  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:16.972642  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.972652  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.974550  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:16.974574  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.974582  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.974589  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.974595  652427 round_trippers.go:580]     Audit-Id: eb5bb099-a121-42ca-b11a-c7fe8f8e6b6e
	I0329 17:46:16.974603  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.974614  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.974622  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.974740  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.468304  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:17.468337  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.468345  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.470738  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:17.470760  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.470766  652427 round_trippers.go:580]     Audit-Id: 27ec085f-1c7f-45ac-b700-ca0ffcca1900
	I0329 17:46:17.470770  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.470776  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.470782  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.470789  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.470796  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.470916  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5986 chars]
	I0329 17:46:17.471369  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.471385  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.471392  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.473208  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.473229  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.473235  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.473240  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.473244  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.473248  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.473252  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.473256  652427 round_trippers.go:580]     Audit-Id: 7293679e-eb81-43b3-a157-58d13349713a
	I0329 17:46:17.473373  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.473702  652427 pod_ready.go:92] pod "coredns-64897985d-6tcql" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.473738  652427 pod_ready.go:81] duration metric: took 2.010541516s waiting for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.473751  652427 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.473794  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220329174520-564087
	I0329 17:46:17.473802  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.473808  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.475663  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.475684  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.475689  652427 round_trippers.go:580]     Audit-Id: 29c9a7bf-9c5d-4845-8114-9ba425c02b3e
	I0329 17:46:17.475694  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.475698  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.475703  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.475708  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.475714  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.475822  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220329174520-564087","namespace":"kube-system","uid":"ac5cd989-3ac7-4d02-94c0-0c2843391dfe","resourceVersion":"433","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.mirror":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.seen":"2022-03-29T17:45:44.427846936Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"Fiel
dsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube [truncated 5818 chars]
	I0329 17:46:17.476198  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.476212  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.476218  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.477874  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.477898  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.477908  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.477915  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.477930  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.477942  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.477954  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.477965  652427 round_trippers.go:580]     Audit-Id: 1e666c0e-abe8-4fed-b5ca-c2fa024d5633
	I0329 17:46:17.478041  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.478378  652427 pod_ready.go:92] pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.478410  652427 pod_ready.go:81] duration metric: took 4.638363ms waiting for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.478433  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.478498  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220329174520-564087
	I0329 17:46:17.478513  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.478522  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.480244  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.480261  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.480269  652427 round_trippers.go:580]     Audit-Id: 056c8546-3cb7-4ce3-982c-f2fa9315662f
	I0329 17:46:17.480276  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.480283  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.480293  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.480310  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.480317  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.480452  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220329174520-564087","namespace":"kube-system","uid":"112c5d83-654f-4235-9e38-a435d3f2d433","resourceVersion":"363","creationTimestamp":"2022-03-29T17:45:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.mirror":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.seen":"2022-03-29T17:45:37.376573317Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17
:45:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio [truncated 8327 chars]
	I0329 17:46:17.480880  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.480895  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.480900  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.482414  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.482442  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.482449  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.482454  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.482461  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.482468  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.482485  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.482492  652427 round_trippers.go:580]     Audit-Id: 0c8430bd-4516-4580-90d7-85b9217ea3c8
	I0329 17:46:17.482631  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.482944  652427 pod_ready.go:92] pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.482959  652427 pod_ready.go:81] duration metric: took 4.512997ms waiting for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.482968  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.483011  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220329174520-564087
	I0329 17:46:17.483019  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.483024  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.484605  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.484621  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.484626  652427 round_trippers.go:580]     Audit-Id: 5d0d73a4-b0b0-4217-8288-b31bd4698a5d
	I0329 17:46:17.484631  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.484636  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.484640  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.484645  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.484651  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.484740  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220329174520-564087","namespace":"kube-system","uid":"66589d1b-e363-4195-bbc3-4ff12b3bf3cf","resourceVersion":"370","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.mirror":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.seen":"2022-03-29T17:45:44.427888140Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 7902 chars]
	I0329 17:46:17.485143  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.485157  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.485163  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.486766  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.486785  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.486792  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.486799  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.486806  652427 round_trippers.go:580]     Audit-Id: 9f0c79d0-9283-473d-bd38-8296174b048f
	I0329 17:46:17.486818  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.486829  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.486839  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.486918  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.487213  652427 pod_ready.go:92] pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.487228  652427 pod_ready.go:81] duration metric: took 4.252998ms waiting for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.487239  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.487285  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29kjv
	I0329 17:46:17.487295  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.487305  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.488877  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.488892  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.488899  652427 round_trippers.go:580]     Audit-Id: c94e0991-3439-44f6-b9a8-d55c72d17235
	I0329 17:46:17.488905  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.488912  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.488919  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.488926  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.488937  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.489029  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29kjv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ca1dbe90-6525-4660-81a7-68b2c47378da","resourceVersion":"468","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"controller-revision-hash":"8455b5959d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"27cb158d-aed9-4d83-a6c4-788f687069bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27cb158d-aed9-4d83-a6c4-788f687069bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5551 chars]
	I0329 17:46:17.489390  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.489404  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.489410  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.490733  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.490746  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.490753  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.490760  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.490777  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.490788  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.490802  652427 round_trippers.go:580]     Audit-Id: c0f5c346-4149-48a0-a943-8debe20d6493
	I0329 17:46:17.490809  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.490935  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.491340  652427 pod_ready.go:92] pod "kube-proxy-29kjv" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.491354  652427 pod_ready.go:81] duration metric: took 4.107823ms waiting for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.491366  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.668754  652427 request.go:597] Waited for 177.326154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:17.668816  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:17.668828  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.668836  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.671489  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:17.671513  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.671520  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.671524  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.671529  652427 round_trippers.go:580]     Audit-Id: 5d2e3dce-cddf-439f-b1fe-b58b2bdb851f
	I0329 17:46:17.671533  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.671537  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.671542  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.671684  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220329174520-564087","namespace":"kube-system","uid":"4ba1ded4-06a3-44a7-922f-b02863ff0da0","resourceVersion":"369","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.mirror":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.seen":"2022-03-29T17:45:44.427890891Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:
kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kub [truncated 4784 chars]
	I0329 17:46:17.869132  652427 request.go:597] Waited for 197.047985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.869193  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.869198  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.869205  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.871752  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:17.871773  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.871781  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.871787  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.871793  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.871800  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.871806  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.871817  652427 round_trippers.go:580]     Audit-Id: 39e9f7b8-d734-453f-988c-6c376df9d045
	I0329 17:46:17.871947  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.872273  652427 pod_ready.go:92] pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.872293  652427 pod_ready.go:81] duration metric: took 380.916917ms waiting for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.872308  652427 pod_ready.go:38] duration metric: took 2.416830572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:17.872343  652427 api_server.go:51] waiting for apiserver process to appear ...
	I0329 17:46:17.872393  652427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 17:46:17.881985  652427 command_runner.go:130] > 1711
	I0329 17:46:17.882022  652427 api_server.go:71] duration metric: took 19.511312052s to wait for apiserver process to appear ...
	I0329 17:46:17.882032  652427 api_server.go:87] waiting for apiserver healthz status ...
	I0329 17:46:17.882043  652427 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0329 17:46:17.886427  652427 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0329 17:46:17.886481  652427 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0329 17:46:17.886486  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.886493  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.887114  652427 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0329 17:46:17.887132  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.887140  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.887148  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.887155  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.887162  652427 round_trippers.go:580]     Content-Length: 263
	I0329 17:46:17.887177  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.887181  652427 round_trippers.go:580]     Audit-Id: 0b16d397-f192-4f1f-95ac-dbcfea22cc5e
	I0329 17:46:17.887186  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.887205  652427 request.go:1181] Response Body: {
	  "major": "1",
	  "minor": "23",
	  "gitVersion": "v1.23.5",
	  "gitCommit": "c285e781331a3785a7f436042c65c5641ce8a9e9",
	  "gitTreeState": "clean",
	  "buildDate": "2022-03-16T15:52:18Z",
	  "goVersion": "go1.17.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0329 17:46:17.887283  652427 api_server.go:140] control plane version: v1.23.5
	I0329 17:46:17.887297  652427 api_server.go:130] duration metric: took 5.260463ms to wait for apiserver health ...
	I0329 17:46:17.887303  652427 system_pods.go:43] waiting for kube-system pods to appear ...
	I0329 17:46:18.068636  652427 request.go:597] Waited for 181.269107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.068718  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.068746  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.068754  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.071771  652427 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0329 17:46:18.071791  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.071797  652427 round_trippers.go:580]     Audit-Id: 095e2ec8-e6b2-4c55-9f4a-4e554a84c0a4
	I0329 17:46:18.071801  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.071806  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.071810  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.071815  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.071819  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.072360  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55754 chars]
	I0329 17:46:18.074140  652427 system_pods.go:59] 8 kube-system pods found
	I0329 17:46:18.074181  652427 system_pods.go:61] "coredns-64897985d-6tcql" [a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2] Running
	I0329 17:46:18.074193  652427 system_pods.go:61] "etcd-multinode-20220329174520-564087" [ac5cd989-3ac7-4d02-94c0-0c2843391dfe] Running
	I0329 17:46:18.074201  652427 system_pods.go:61] "kindnet-7hm65" [8d9c821d-cc40-4073-95ab-b810b61210a7] Running
	I0329 17:46:18.074209  652427 system_pods.go:61] "kube-apiserver-multinode-20220329174520-564087" [112c5d83-654f-4235-9e38-a435d3f2d433] Running
	I0329 17:46:18.074220  652427 system_pods.go:61] "kube-controller-manager-multinode-20220329174520-564087" [66589d1b-e363-4195-bbc3-4ff12b3bf3cf] Running
	I0329 17:46:18.074238  652427 system_pods.go:61] "kube-proxy-29kjv" [ca1dbe90-6525-4660-81a7-68b2c47378da] Running
	I0329 17:46:18.074245  652427 system_pods.go:61] "kube-scheduler-multinode-20220329174520-564087" [4ba1ded4-06a3-44a7-922f-b02863ff0da0] Running
	I0329 17:46:18.074251  652427 system_pods.go:61] "storage-provisioner" [7d9d3f42-beb4-4d9d-82ac-3984ac52c132] Running
	I0329 17:46:18.074263  652427 system_pods.go:74] duration metric: took 186.954024ms to wait for pod list to return data ...
	I0329 17:46:18.074277  652427 default_sa.go:34] waiting for default service account to be created ...
	I0329 17:46:18.268750  652427 request.go:597] Waited for 194.395103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0329 17:46:18.268819  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0329 17:46:18.268826  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.268848  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.271155  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:18.271177  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.271188  652427 round_trippers.go:580]     Content-Length: 304
	I0329 17:46:18.271192  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.271197  652427 round_trippers.go:580]     Audit-Id: cbcb0903-9e90-4114-85e1-fa7a6c1f515f
	I0329 17:46:18.271204  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.271211  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.271222  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.271229  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.271258  652427 request.go:1181] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"89609a58-81bb-4b4a-bdd9-152993280465","resourceVersion":"384","creationTimestamp":"2022-03-29T17:45:57Z"},"secrets":[{"name":"default-token-vh5wm"}]}]}
	I0329 17:46:18.271467  652427 default_sa.go:45] found service account: "default"
	I0329 17:46:18.271484  652427 default_sa.go:55] duration metric: took 197.197498ms for default service account to be created ...
	I0329 17:46:18.271491  652427 system_pods.go:116] waiting for k8s-apps to be running ...
	I0329 17:46:18.468930  652427 request.go:597] Waited for 197.338154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.468982  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.468987  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.468994  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.473968  652427 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0329 17:46:18.473999  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.474018  652427 round_trippers.go:580]     Audit-Id: ead69c26-9a5d-4812-a5d3-7fc3e0e396e4
	I0329 17:46:18.474025  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.474032  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.474038  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.474052  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.474059  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.475192  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55754 chars]
	I0329 17:46:18.477578  652427 system_pods.go:86] 8 kube-system pods found
	I0329 17:46:18.477613  652427 system_pods.go:89] "coredns-64897985d-6tcql" [a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2] Running
	I0329 17:46:18.477621  652427 system_pods.go:89] "etcd-multinode-20220329174520-564087" [ac5cd989-3ac7-4d02-94c0-0c2843391dfe] Running
	I0329 17:46:18.477627  652427 system_pods.go:89] "kindnet-7hm65" [8d9c821d-cc40-4073-95ab-b810b61210a7] Running
	I0329 17:46:18.477633  652427 system_pods.go:89] "kube-apiserver-multinode-20220329174520-564087" [112c5d83-654f-4235-9e38-a435d3f2d433] Running
	I0329 17:46:18.477640  652427 system_pods.go:89] "kube-controller-manager-multinode-20220329174520-564087" [66589d1b-e363-4195-bbc3-4ff12b3bf3cf] Running
	I0329 17:46:18.477652  652427 system_pods.go:89] "kube-proxy-29kjv" [ca1dbe90-6525-4660-81a7-68b2c47378da] Running
	I0329 17:46:18.477657  652427 system_pods.go:89] "kube-scheduler-multinode-20220329174520-564087" [4ba1ded4-06a3-44a7-922f-b02863ff0da0] Running
	I0329 17:46:18.477670  652427 system_pods.go:89] "storage-provisioner" [7d9d3f42-beb4-4d9d-82ac-3984ac52c132] Running
	I0329 17:46:18.477678  652427 system_pods.go:126] duration metric: took 206.181639ms to wait for k8s-apps to be running ...
	I0329 17:46:18.477692  652427 system_svc.go:44] waiting for kubelet service to be running ....
	I0329 17:46:18.477740  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:46:18.491034  652427 system_svc.go:56] duration metric: took 13.333193ms WaitForService to wait for kubelet.
	I0329 17:46:18.491059  652427 kubeadm.go:548] duration metric: took 20.120349922s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0329 17:46:18.491078  652427 node_conditions.go:102] verifying NodePressure condition ...
	I0329 17:46:18.668395  652427 request.go:597] Waited for 177.238607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:18.668474  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:18.668490  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.668498  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.671002  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:18.671034  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.671040  652427 round_trippers.go:580]     Audit-Id: e64adfe9-d069-446e-bc45-1bc0796f0b85
	I0329 17:46:18.671045  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.671049  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.671055  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.671065  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.671081  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.671196  652427 request.go:1181] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","vol
umes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi [truncated 5296 chars]
	I0329 17:46:18.671557  652427 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0329 17:46:18.671577  652427 node_conditions.go:123] node cpu capacity is 8
	I0329 17:46:18.671589  652427 node_conditions.go:105] duration metric: took 180.507051ms to run NodePressure ...
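(The "ephemeral capacity is 304695084Ki" figure above uses Kubernetes binary-suffix quantities. A small sketch of converting such a string to bytes — this handles only the Ki/Mi/Gi suffixes seen in node capacity fields, not the full `resource.Quantity` grammar:)

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// binarySuffixToBytes converts a kubelet capacity string such as
// "304695084Ki" to a byte count. Only Ki/Mi/Gi are handled here.
func binarySuffixToBytes(s string) (int64, error) {
	multipliers := map[string]int64{"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30}
	for suffix, m := range multipliers {
		if strings.HasSuffix(s, suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suffix), 10, 64)
			return n * m, err
		}
	}
	// No recognized suffix: treat the value as plain bytes.
	return strconv.ParseInt(s, 10, 64)
}

func main() {
	b, err := binarySuffixToBytes("304695084Ki")
	if err != nil {
		panic(err)
	}
	fmt.Println(b) // 304695084 * 1024 bytes, roughly 290 GiB of ephemeral storage
}
```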
	I0329 17:46:18.671604  652427 start.go:213] waiting for startup goroutines ...
	I0329 17:46:18.673932  652427 out.go:176] 
	I0329 17:46:18.674128  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:46:18.674206  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:46:18.676080  652427 out.go:176] * Starting worker node multinode-20220329174520-564087-m02 in cluster multinode-20220329174520-564087
	I0329 17:46:18.676110  652427 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:46:18.677788  652427 out.go:176] * Pulling base image ...
	I0329 17:46:18.677823  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:46:18.677844  652427 cache.go:57] Caching tarball of preloaded images
	I0329 17:46:18.677911  652427 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:46:18.677993  652427 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 17:46:18.678015  652427 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 17:46:18.678095  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:46:18.723196  652427 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 17:46:18.723227  652427 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 17:46:18.723246  652427 cache.go:208] Successfully downloaded all kic artifacts
	I0329 17:46:18.723288  652427 start.go:348] acquiring machines lock for multinode-20220329174520-564087-m02: {Name:mk2e91789fb1ab42dd81da420c805ce0e9722cdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 17:46:18.723439  652427 start.go:352] acquired machines lock for "multinode-20220329174520-564087-m02" in 125.766µs
	I0329 17:46:18.723467  652427 start.go:90] Provisioning new machine with config: &{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name:m02 IP: Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0329 17:46:18.723578  652427 start.go:127] createHost starting for "m02" (driver="docker")
	I0329 17:46:18.726752  652427 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0329 17:46:18.726864  652427 start.go:161] libmachine.API.Create for "multinode-20220329174520-564087" (driver="docker")
	I0329 17:46:18.726895  652427 client.go:168] LocalClient.Create starting
	I0329 17:46:18.726980  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 17:46:18.727018  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:46:18.727043  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:46:18.727103  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 17:46:18.727128  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:46:18.727146  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:46:18.727418  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:46:18.759003  652427 network_create.go:75] Found existing network {name:multinode-20220329174520-564087 subnet:0xc000b6ee70 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0329 17:46:18.759045  652427 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20220329174520-564087-m02" container
	I0329 17:46:18.759107  652427 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 17:46:18.790190  652427 cli_runner.go:133] Run: docker volume create multinode-20220329174520-564087-m02 --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --label created_by.minikube.sigs.k8s.io=true
	I0329 17:46:18.822198  652427 oci.go:102] Successfully created a docker volume multinode-20220329174520-564087-m02
	I0329 17:46:18.822278  652427 cli_runner.go:133] Run: docker run --rm --name multinode-20220329174520-564087-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --entrypoint /usr/bin/test -v multinode-20220329174520-564087-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 17:46:19.362957  652427 oci.go:106] Successfully prepared a docker volume multinode-20220329174520-564087-m02
	I0329 17:46:19.363011  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:46:19.363038  652427 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 17:46:19.363116  652427 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 17:46:27.622548  652427 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.259374052s)
	I0329 17:46:27.622582  652427 kic.go:188] duration metric: took 8.259542 seconds to extract preloaded images to volume
	W0329 17:46:27.622637  652427 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0329 17:46:27.622651  652427 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0329 17:46:27.622694  652427 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 17:46:27.708276  652427 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220329174520-564087-m02 --name multinode-20220329174520-564087-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --network multinode-20220329174520-564087 --ip 192.168.49.3 --volume multinode-20220329174520-564087-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 17:46:28.111983  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Running}}
	I0329 17:46:28.145714  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:46:28.180625  652427 cli_runner.go:133] Run: docker exec multinode-20220329174520-564087-m02 stat /var/lib/dpkg/alternatives/iptables
	I0329 17:46:28.243280  652427 oci.go:278] the created container "multinode-20220329174520-564087-m02" has a running status.
	I0329 17:46:28.243315  652427 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa...
	I0329 17:46:28.352947  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0329 17:46:28.353011  652427 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 17:46:28.439794  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:46:28.474473  652427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 17:46:28.474496  652427 kic_runner.go:114] Args: [docker exec --privileged multinode-20220329174520-564087-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 17:46:28.564773  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:46:28.597591  652427 machine.go:88] provisioning docker machine ...
	I0329 17:46:28.597636  652427 ubuntu.go:169] provisioning hostname "multinode-20220329174520-564087-m02"
	I0329 17:46:28.597701  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:28.632823  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:28.633110  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:28.633138  652427 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20220329174520-564087-m02 && echo "multinode-20220329174520-564087-m02" | sudo tee /etc/hostname
	I0329 17:46:28.766115  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20220329174520-564087-m02
	
	I0329 17:46:28.766206  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:28.797826  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:28.797996  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:28.798026  652427 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220329174520-564087-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220329174520-564087-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220329174520-564087-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 17:46:28.916911  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 17:46:28.916949  652427 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem
ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 17:46:28.916974  652427 ubuntu.go:177] setting up certificates
	I0329 17:46:28.916985  652427 provision.go:83] configureAuth start
	I0329 17:46:28.917046  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087-m02
	I0329 17:46:28.949268  652427 provision.go:138] copyHostCerts
	I0329 17:46:28.949312  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:46:28.949352  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 17:46:28.949364  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:46:28.949439  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 17:46:28.949531  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:46:28.949559  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 17:46:28.949568  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:46:28.949603  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 17:46:28.949656  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:46:28.949683  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 17:46:28.949694  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:46:28.949724  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 17:46:28.949779  652427 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.multinode-20220329174520-564087-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220329174520-564087-m02]
	I0329 17:46:29.106466  652427 provision.go:172] copyRemoteCerts
	I0329 17:46:29.106538  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 17:46:29.106577  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.139382  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:29.228429  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0329 17:46:29.228500  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0329 17:46:29.245652  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0329 17:46:29.245711  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0329 17:46:29.262283  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0329 17:46:29.262348  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 17:46:29.279716  652427 provision.go:86] duration metric: configureAuth took 362.712391ms
	I0329 17:46:29.279748  652427 ubuntu.go:193] setting minikube options for container-runtime
	I0329 17:46:29.279938  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:46:29.279989  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.312282  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:29.312430  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:29.312444  652427 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 17:46:29.429214  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 17:46:29.429264  652427 ubuntu.go:71] root file system type: overlay
	I0329 17:46:29.429486  652427 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 17:46:29.429555  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.462121  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:29.462284  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:29.462382  652427 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 17:46:29.590118  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 17:46:29.590193  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.622550  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:29.622701  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:29.622720  652427 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 17:46:30.245909  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 17:46:29.582120620 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 17:46:30.245951  652427 machine.go:91] provisioned docker machine in 1.648333276s
	I0329 17:46:30.245970  652427 client.go:171] LocalClient.Create took 11.519055245s
	I0329 17:46:30.245989  652427 start.go:169] duration metric: libmachine.API.Create for "multinode-20220329174520-564087" took 11.519124811s
	I0329 17:46:30.246003  652427 start.go:302] post-start starting for "multinode-20220329174520-564087-m02" (driver="docker")
	I0329 17:46:30.246012  652427 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 17:46:30.246078  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 17:46:30.246130  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.277490  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.364671  652427 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 17:46:30.367418  652427 command_runner.go:130] > NAME="Ubuntu"
	I0329 17:46:30.367445  652427 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0329 17:46:30.367453  652427 command_runner.go:130] > ID=ubuntu
	I0329 17:46:30.367460  652427 command_runner.go:130] > ID_LIKE=debian
	I0329 17:46:30.367468  652427 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0329 17:46:30.367474  652427 command_runner.go:130] > VERSION_ID="20.04"
	I0329 17:46:30.367485  652427 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0329 17:46:30.367497  652427 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0329 17:46:30.367506  652427 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0329 17:46:30.367539  652427 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0329 17:46:30.367549  652427 command_runner.go:130] > VERSION_CODENAME=focal
	I0329 17:46:30.367553  652427 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0329 17:46:30.367651  652427 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 17:46:30.367671  652427 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 17:46:30.367686  652427 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 17:46:30.367698  652427 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 17:46:30.367710  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 17:46:30.367769  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 17:46:30.367850  652427 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> 5640872.pem in /etc/ssl/certs
	I0329 17:46:30.367863  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /etc/ssl/certs/5640872.pem
	I0329 17:46:30.367965  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 17:46:30.374493  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:46:30.391498  652427 start.go:305] post-start completed in 145.478012ms
	I0329 17:46:30.391818  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087-m02
	I0329 17:46:30.423636  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:46:30.423930  652427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 17:46:30.423987  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.455928  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.537369  652427 command_runner.go:130] > 18%!
	(MISSING)I0329 17:46:30.537705  652427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 17:46:30.541289  652427 command_runner.go:130] > 241G
	I0329 17:46:30.541553  652427 start.go:130] duration metric: createHost completed in 11.817959572s
	I0329 17:46:30.541574  652427 start.go:81] releasing machines lock for "multinode-20220329174520-564087-m02", held for 11.818121343s
	I0329 17:46:30.541656  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087-m02
	I0329 17:46:30.574878  652427 out.go:176] * Found network options:
	I0329 17:46:30.576445  652427 out.go:176]   - NO_PROXY=192.168.49.2
	W0329 17:46:30.576500  652427 proxy.go:118] fail to check proxy env: Error ip not in block
	W0329 17:46:30.576558  652427 proxy.go:118] fail to check proxy env: Error ip not in block
	I0329 17:46:30.576640  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 17:46:30.576692  652427 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 17:46:30.576750  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.576695  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.609630  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.610562  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.698938  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:46:30.833371  652427 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0329 17:46:30.833400  652427 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0329 17:46:30.833416  652427 command_runner.go:130] > <H1>302 Moved</H1>
	I0329 17:46:30.833423  652427 command_runner.go:130] > The document has moved
	I0329 17:46:30.833432  652427 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0329 17:46:30.833437  652427 command_runner.go:130] > </BODY></HTML>
	I0329 17:46:30.833498  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:46:30.833506  652427 command_runner.go:130] > [Unit]
	I0329 17:46:30.833512  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:46:30.833518  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:46:30.833528  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:46:30.833538  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:46:30.833549  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:46:30.833560  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:46:30.833570  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:46:30.833576  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:46:30.833586  652427 command_runner.go:130] > [Service]
	I0329 17:46:30.833598  652427 command_runner.go:130] > Type=notify
	I0329 17:46:30.833601  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:46:30.833611  652427 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0329 17:46:30.833623  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:46:30.833639  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:46:30.833653  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:46:30.833667  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:46:30.833680  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:46:30.833694  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:46:30.833710  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:46:30.833726  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:46:30.833743  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:46:30.833753  652427 command_runner.go:130] > ExecStart=
	I0329 17:46:30.833775  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:46:30.833787  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:46:30.833798  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:46:30.833812  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:46:30.833819  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:46:30.833829  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:46:30.833838  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:46:30.833847  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:46:30.833858  652427 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0329 17:46:30.833867  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:46:30.833873  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:46:30.833884  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:46:30.833888  652427 command_runner.go:130] > Delegate=yes
	I0329 17:46:30.833900  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:46:30.833909  652427 command_runner.go:130] > KillMode=process
	I0329 17:46:30.833917  652427 command_runner.go:130] > [Install]
	I0329 17:46:30.833924  652427 command_runner.go:130] > WantedBy=multi-user.target
	I0329 17:46:30.833949  652427 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 17:46:30.834006  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 17:46:30.843529  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 17:46:30.855086  652427 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0329 17:46:30.855110  652427 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
	I0329 17:46:30.855881  652427 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 17:46:30.932521  652427 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 17:46:31.008655  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:46:31.018037  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:46:31.018058  652427 command_runner.go:130] > [Unit]
	I0329 17:46:31.018064  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:46:31.018069  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:46:31.018073  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:46:31.018081  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:46:31.018091  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:46:31.018104  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:46:31.018114  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:46:31.018120  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:46:31.018127  652427 command_runner.go:130] > [Service]
	I0329 17:46:31.018131  652427 command_runner.go:130] > Type=notify
	I0329 17:46:31.018136  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:46:31.018141  652427 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0329 17:46:31.018151  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:46:31.018160  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:46:31.018170  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:46:31.018184  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:46:31.018199  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:46:31.018213  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:46:31.018226  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:46:31.018240  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:46:31.018250  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:46:31.018259  652427 command_runner.go:130] > ExecStart=
	I0329 17:46:31.018276  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:46:31.018291  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:46:31.018302  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:46:31.018316  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:46:31.018327  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:46:31.018335  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:46:31.018339  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:46:31.018347  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:46:31.018352  652427 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0329 17:46:31.018358  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:46:31.018363  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:46:31.018373  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:46:31.018383  652427 command_runner.go:130] > Delegate=yes
	I0329 17:46:31.018397  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:46:31.018408  652427 command_runner.go:130] > KillMode=process
	I0329 17:46:31.018428  652427 command_runner.go:130] > [Install]
	I0329 17:46:31.018439  652427 command_runner.go:130] > WantedBy=multi-user.target
	I0329 17:46:31.018493  652427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 17:46:31.094457  652427 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 17:46:31.103969  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:46:31.141113  652427 command_runner.go:130] > 20.10.13
	I0329 17:46:31.142886  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:46:31.180277  652427 command_runner.go:130] > 20.10.13
	I0329 17:46:31.185191  652427 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 17:46:31.186476  652427 out.go:176]   - env NO_PROXY=192.168.49.2
	I0329 17:46:31.186531  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:46:31.218241  652427 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0329 17:46:31.221554  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:46:31.231052  652427 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087 for IP: 192.168.49.3
	I0329 17:46:31.231159  652427 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 17:46:31.231201  652427 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 17:46:31.231215  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0329 17:46:31.231226  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0329 17:46:31.231238  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0329 17:46:31.231249  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0329 17:46:31.231296  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem (1338 bytes)
	W0329 17:46:31.231330  652427 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087_empty.pem, impossibly tiny 0 bytes
	I0329 17:46:31.231349  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1679 bytes)
	I0329 17:46:31.231372  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 17:46:31.231394  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 17:46:31.231416  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 17:46:31.231452  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:46:31.231479  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem -> /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.231488  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.231503  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.231880  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 17:46:31.248809  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0329 17:46:31.265517  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 17:46:31.282382  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0329 17:46:31.298857  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem --> /usr/share/ca-certificates/564087.pem (1338 bytes)
	I0329 17:46:31.315403  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /usr/share/ca-certificates/5640872.pem (1708 bytes)
	I0329 17:46:31.332058  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 17:46:31.348770  652427 ssh_runner.go:195] Run: openssl version
	I0329 17:46:31.353423  652427 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0329 17:46:31.353480  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/564087.pem && ln -fs /usr/share/ca-certificates/564087.pem /etc/ssl/certs/564087.pem"
	I0329 17:46:31.360293  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.363230  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.363359  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.363408  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.367738  652427 command_runner.go:130] > 51391683
	I0329 17:46:31.367944  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/564087.pem /etc/ssl/certs/51391683.0"
	I0329 17:46:31.374845  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5640872.pem && ln -fs /usr/share/ca-certificates/5640872.pem /etc/ssl/certs/5640872.pem"
	I0329 17:46:31.381870  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.384584  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.384689  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.384738  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.389169  652427 command_runner.go:130] > 3ec20f2e
	I0329 17:46:31.389423  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5640872.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 17:46:31.396284  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 17:46:31.403143  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.405952  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.406065  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.406102  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.410661  652427 command_runner.go:130] > b5213941
	I0329 17:46:31.410719  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
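The `openssl x509 -hash` / `ln -fs` pairs above build OpenSSL's hashed certificate directory: each CA under `/etc/ssl/certs` gets a symlink named `<subject-hash>.0` so the library can locate it by hash at verification time. A sketch of just the link-naming step (the hash values such as `51391683` come straight from the log; computing a subject hash requires OpenSSL itself, so it is passed in here):

```python
import os

def install_ca_link(certs_dir: str, pem_name: str, subject_hash: str) -> str:
    """Create the `<hash>.0` symlink OpenSSL expects, replacing any stale one."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    target = os.path.join(certs_dir, pem_name)
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)  # mirrors the force flag in `ln -fs`
    os.symlink(target, link)
    return link
```

The `.0` suffix is a collision counter; a second CA with the same subject hash would get `.1`, which this sketch does not handle.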
	I0329 17:46:31.417818  652427 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 17:46:31.496155  652427 command_runner.go:130] > cgroupfs
	I0329 17:46:31.498030  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:46:31.498045  652427 cni.go:154] 2 nodes found, recommending kindnet
	I0329 17:46:31.498058  652427 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 17:46:31.498074  652427 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220329174520-564087 NodeName:multinode-20220329174520-564087-m02 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCAFi
le:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 17:46:31.498180  652427 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220329174520-564087-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
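The generated kubeadm config above is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A stdlib-only sketch that splits such a stream and lists the `kind` of each document (no YAML parser is needed for this one top-level field):

```python
def kinds(config: str) -> list:
    """Return the `kind:` value of each document in a multi-doc YAML stream."""
    out = []
    for doc in config.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                out.append(line.split(":", 1)[1].strip())
    return out
```

kubeadm itself unmarshals each document by its `apiVersion`/`kind` pair; this only illustrates the stream layout.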
	I0329 17:46:31.498239  652427 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220329174520-564087-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0329 17:46:31.498290  652427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 17:46:31.505442  652427 command_runner.go:130] > kubeadm
	I0329 17:46:31.505465  652427 command_runner.go:130] > kubectl
	I0329 17:46:31.505470  652427 command_runner.go:130] > kubelet
	I0329 17:46:31.505491  652427 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 17:46:31.505541  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0329 17:46:31.512273  652427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (413 bytes)
	I0329 17:46:31.524469  652427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
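The `10-kubeadm.conf` drop-in copied here relies on a systemd convention: an empty `ExecStart=` line first clears any `ExecStart` inherited from the base `kubelet.service` unit, and the line after it supplies the replacement. A minimal sketch of that shape (flags abbreviated; the full command line appears in the log above):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --config=/var/lib/kubelet/config.yaml
```

Without the clearing line, systemd would reject the unit: `Type=simple` services allow only one `ExecStart`.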
	I0329 17:46:31.536739  652427 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 17:46:31.539551  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:46:31.549072  652427 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:46:31.549306  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:46:31.549364  652427 start.go:282] JoinCluster: &{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:46:31.549449  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0329 17:46:31.549494  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:46:31.580977  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:46:31.711236  652427 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4etg3b.08xa8aumglz9h3at --discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 
	I0329 17:46:31.715409  652427 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0329 17:46:31.715460  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4etg3b.08xa8aumglz9h3at --discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220329174520-564087-m02"
	I0329 17:46:31.921830  652427 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1021-gcp\n", err: exit status 1
	I0329 17:46:31.989345  652427 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0329 17:46:37.964200  652427 command_runner.go:130] > [preflight] Running pre-flight checks
	I0329 17:46:37.964226  652427 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0329 17:46:37.964233  652427 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1021-gcp
	I0329 17:46:37.964237  652427 command_runner.go:130] > DOCKER_VERSION: 20.10.13
	I0329 17:46:37.964246  652427 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0329 17:46:37.964258  652427 command_runner.go:130] > OS: Linux
	I0329 17:46:37.964266  652427 command_runner.go:130] > CGROUPS_CPU: enabled
	I0329 17:46:37.964278  652427 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0329 17:46:37.964289  652427 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0329 17:46:37.964307  652427 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0329 17:46:37.964316  652427 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0329 17:46:37.964321  652427 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0329 17:46:37.964327  652427 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0329 17:46:37.964334  652427 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0329 17:46:37.964339  652427 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0329 17:46:37.964349  652427 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0329 17:46:37.964359  652427 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0329 17:46:37.964366  652427 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0329 17:46:37.964374  652427 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0329 17:46:37.964385  652427 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0329 17:46:37.964392  652427 command_runner.go:130] > This node has joined the cluster:
	I0329 17:46:37.964398  652427 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0329 17:46:37.964408  652427 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0329 17:46:37.964418  652427 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0329 17:46:37.964438  652427 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4etg3b.08xa8aumglz9h3at --discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220329174520-564087-m02": (6.248966616s)
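The `--discovery-token-ca-cert-hash` passed to the join above is, per kubeadm's documented format, the string `sha256:` followed by the hex SHA-256 digest of the cluster CA's Subject Public Key Info in DER form. A sketch of the formatting step only (extracting the SPKI from a PEM certificate needs a crypto library, so the bytes here are a placeholder):

```python
import hashlib

def discovery_hash(spki_der: bytes) -> str:
    """Format a CA public-key digest the way `kubeadm join` expects it."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()
```

Joining nodes recompute this digest from the CA they fetch over the insecure bootstrap channel and refuse to proceed on a mismatch, which is what makes the token-based join safe against a spoofed API server.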
	I0329 17:46:37.964458  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0329 17:46:38.132714  652427 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0329 17:46:38.132755  652427 start.go:284] JoinCluster complete in 6.583390289s
	I0329 17:46:38.132766  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:46:38.132775  652427 cni.go:154] 2 nodes found, recommending kindnet
	I0329 17:46:38.132829  652427 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0329 17:46:38.136185  652427 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0329 17:46:38.136207  652427 command_runner.go:130] >   Size: 2675000   	Blocks: 5232       IO Block: 4096   regular file
	I0329 17:46:38.136214  652427 command_runner.go:130] > Device: 34h/52d	Inode: 8004372     Links: 1
	I0329 17:46:38.136220  652427 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0329 17:46:38.136225  652427 command_runner.go:130] > Access: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:46:38.136231  652427 command_runner.go:130] > Modify: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:46:38.136235  652427 command_runner.go:130] > Change: 2022-03-21 20:07:13.664642338 +0000
	I0329 17:46:38.136239  652427 command_runner.go:130] >  Birth: -
	I0329 17:46:38.136316  652427 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 17:46:38.136333  652427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0329 17:46:38.148856  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 17:46:38.285034  652427 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0329 17:46:38.285081  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0329 17:46:38.285090  652427 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0329 17:46:38.285097  652427 command_runner.go:130] > daemonset.apps/kindnet configured
	I0329 17:46:38.285134  652427 start.go:208] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0329 17:46:38.287285  652427 out.go:176] * Verifying Kubernetes components...
	I0329 17:46:38.287341  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:46:38.297040  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:46:38.297379  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode
-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:46:38.297695  652427 node_ready.go:35] waiting up to 6m0s for node "multinode-20220329174520-564087-m02" to be "Ready" ...
	I0329 17:46:38.297758  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:38.297766  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:38.297772  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:38.299797  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:38.299814  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:38.299819  652427 round_trippers.go:580]     Audit-Id: 9ed3346a-8583-43a5-bfc3-981f89b068bc
	I0329 17:46:38.299824  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:38.299828  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:38.299832  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:38.299836  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:38.299841  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:38 GMT
	I0329 17:46:38.299938  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:38.801017  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:38.801047  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:38.801082  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:38.803485  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:38.803506  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:38.803512  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:38 GMT
	I0329 17:46:38.803516  652427 round_trippers.go:580]     Audit-Id: 9e1d8393-dad9-44f3-ae3a-976c82ed7ce2
	I0329 17:46:38.803521  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:38.803525  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:38.803530  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:38.803534  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:38.803629  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:39.301256  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:39.301278  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:39.301285  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:39.303396  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:39.303416  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:39.303422  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:39 GMT
	I0329 17:46:39.303426  652427 round_trippers.go:580]     Audit-Id: 456b299e-347e-49bb-9709-7fbf84e791c2
	I0329 17:46:39.303430  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:39.303434  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:39.303439  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:39.303443  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:39.303522  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:39.801216  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:39.801239  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:39.801256  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:39.803912  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:39.803938  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:39.803947  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:39 GMT
	I0329 17:46:39.803955  652427 round_trippers.go:580]     Audit-Id: c7012e48-072b-4332-918c-35a936875441
	I0329 17:46:39.803963  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:39.803971  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:39.803984  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:39.803991  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:39.804105  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:40.300413  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:40.300435  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:40.300442  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:40.302602  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:40.302622  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:40.302630  652427 round_trippers.go:580]     Audit-Id: ab242fc9-aec1-45c6-a423-59cfde21eced
	I0329 17:46:40.302638  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:40.302645  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:40.302651  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:40.302657  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:40.302664  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:40 GMT
	I0329 17:46:40.302775  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:40.303085  652427 node_ready.go:58] node "multinode-20220329174520-564087-m02" has status "Ready":"False"
	I0329 17:46:40.800430  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:40.800487  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:40.800499  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:40.803017  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:40.803045  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:40.803055  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:40.803062  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:40.803069  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:40 GMT
	I0329 17:46:40.803080  652427 round_trippers.go:580]     Audit-Id: 7765c00f-d344-4c76-a107-e01a50f363a9
	I0329 17:46:40.803090  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:40.803101  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:40.803223  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:41.300692  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:41.300718  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:41.300728  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:41.303390  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:41.303417  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:41.303428  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:41.303436  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:41.303443  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:41.303451  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:41.303463  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:41 GMT
	I0329 17:46:41.303474  652427 round_trippers.go:580]     Audit-Id: 9af8188b-83c5-4116-831e-262f07849a0b
	I0329 17:46:41.303603  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:41.801147  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:41.801171  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:41.801181  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:41.802847  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:41.802871  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:41.802879  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:41.802885  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:41.802896  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:41.802903  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:41 GMT
	I0329 17:46:41.802919  652427 round_trippers.go:580]     Audit-Id: 795a6e34-8a29-4035-8216-c4f80c32c844
	I0329 17:46:41.802929  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:41.803043  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:42.300581  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:42.300608  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:42.300617  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:42.302837  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:42.302859  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:42.302867  652427 round_trippers.go:580]     Audit-Id: 09fe99d6-4a61-4ac3-a7f3-eb82ad08369f
	I0329 17:46:42.302875  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:42.302882  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:42.302888  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:42.302894  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:42.302900  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:42 GMT
	I0329 17:46:42.302999  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:42.303283  652427 node_ready.go:58] node "multinode-20220329174520-564087-m02" has status "Ready":"False"
	I0329 17:46:42.800600  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:42.800627  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:42.800634  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:42.803009  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:42.803031  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:42.803038  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:42.803042  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:42.803047  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:42 GMT
	I0329 17:46:42.803051  652427 round_trippers.go:580]     Audit-Id: 2b9d3025-853f-4611-b763-e473d841bcd7
	I0329 17:46:42.803055  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:42.803059  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:42.803193  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:43.300809  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:43.300835  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:43.300843  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:43.303032  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:43.303055  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:43.303064  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:43.303071  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:43.303078  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:43.303085  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:43 GMT
	I0329 17:46:43.303096  652427 round_trippers.go:580]     Audit-Id: 8b42898e-bb8b-4233-a388-98e03b7bafa5
	I0329 17:46:43.303102  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:43.303214  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:43.800773  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:43.800799  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:43.800806  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:43.803188  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:43.803207  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:43.803216  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:43 GMT
	I0329 17:46:43.803223  652427 round_trippers.go:580]     Audit-Id: 7778e186-b125-4fbd-a697-a484e631b7bd
	I0329 17:46:43.803230  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:43.803237  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:43.803248  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:43.803252  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:43.803375  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:44.300464  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:44.300495  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.300504  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.302786  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.302817  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.302827  652427 round_trippers.go:580]     Audit-Id: 1fe7df79-f0b1-42e6-83d9-c022421fed27
	I0329 17:46:44.302834  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.302842  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.302851  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.302865  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.302872  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.303002  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"573","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.alpha.kubernetes.io/cri-socket":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os"
:{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernet [truncated 4429 chars]
	I0329 17:46:44.303345  652427 node_ready.go:49] node "multinode-20220329174520-564087-m02" has status "Ready":"True"
	I0329 17:46:44.303367  652427 node_ready.go:38] duration metric: took 6.005654305s waiting for node "multinode-20220329174520-564087-m02" to be "Ready" ...
	I0329 17:46:44.303377  652427 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:44.303439  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:44.303449  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.303456  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.306030  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.306048  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.306053  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.306058  652427 round_trippers.go:580]     Audit-Id: 4c627f37-9990-4c01-9b79-61a0e561565e
	I0329 17:46:44.306062  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.306067  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.306072  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.306076  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.306543  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 69183 chars]
	I0329 17:46:44.308707  652427 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.308780  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:44.308792  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.308803  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.310448  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.310470  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.310479  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.310496  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.310502  652427 round_trippers.go:580]     Audit-Id: 786c23a6-9f1c-420c-9489-92d88f2e926c
	I0329 17:46:44.310510  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.310521  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.310532  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.310646  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5986 chars]
	I0329 17:46:44.311012  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.311024  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.311031  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.312477  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.312494  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.312503  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.312510  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.312517  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.312523  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.312531  652427 round_trippers.go:580]     Audit-Id: 05e711fa-a117-4fc2-9020-f1cc213b7b5f
	I0329 17:46:44.312547  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.312663  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.312951  652427 pod_ready.go:92] pod "coredns-64897985d-6tcql" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.312964  652427 pod_ready.go:81] duration metric: took 4.235056ms waiting for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.312972  652427 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.313010  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220329174520-564087
	I0329 17:46:44.313019  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.313025  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.314608  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.314629  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.314637  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.314645  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.314652  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.314660  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.314668  652427 round_trippers.go:580]     Audit-Id: 54f8b45f-dfb9-477b-98de-bac98b015d50
	I0329 17:46:44.314672  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.314810  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220329174520-564087","namespace":"kube-system","uid":"ac5cd989-3ac7-4d02-94c0-0c2843391dfe","resourceVersion":"433","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.mirror":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.seen":"2022-03-29T17:45:44.427846936Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"Fiel
dsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube [truncated 5818 chars]
	I0329 17:46:44.315152  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.315164  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.315171  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.316638  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.316657  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.316670  652427 round_trippers.go:580]     Audit-Id: ceadb479-8a1c-4ac1-bdd1-dc3e4a2ee72a
	I0329 17:46:44.316677  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.316685  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.316695  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.316703  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.316715  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.316807  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.317099  652427 pod_ready.go:92] pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.317114  652427 pod_ready.go:81] duration metric: took 4.135789ms waiting for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.317131  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.317181  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220329174520-564087
	I0329 17:46:44.317191  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.317200  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.318684  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.318699  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.318705  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.318709  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.318714  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.318721  652427 round_trippers.go:580]     Audit-Id: c541c22d-2540-4459-bd7f-d72d6f23d26c
	I0329 17:46:44.318735  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.318742  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.318871  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220329174520-564087","namespace":"kube-system","uid":"112c5d83-654f-4235-9e38-a435d3f2d433","resourceVersion":"363","creationTimestamp":"2022-03-29T17:45:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.mirror":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.seen":"2022-03-29T17:45:37.376573317Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17
:45:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio [truncated 8327 chars]
	I0329 17:46:44.319240  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.319255  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.319265  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.320591  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.320614  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.320622  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.320630  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.320638  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.320652  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.320660  652427 round_trippers.go:580]     Audit-Id: 59443bad-5834-4b07-b56c-6ffa0b39bcb9
	I0329 17:46:44.320671  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.320755  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.321007  652427 pod_ready.go:92] pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.321020  652427 pod_ready.go:81] duration metric: took 3.878271ms waiting for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.321029  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.321090  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220329174520-564087
	I0329 17:46:44.321101  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.321106  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.322554  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.322574  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.322582  652427 round_trippers.go:580]     Audit-Id: bff57800-df0a-44d0-9dab-3058b46c38da
	I0329 17:46:44.322589  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.322595  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.322607  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.322613  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.322628  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.322738  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220329174520-564087","namespace":"kube-system","uid":"66589d1b-e363-4195-bbc3-4ff12b3bf3cf","resourceVersion":"370","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.mirror":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.seen":"2022-03-29T17:45:44.427888140Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 7902 chars]
	I0329 17:46:44.323105  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.323118  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.323124  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.324414  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.324429  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.324434  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.324440  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.324444  652427 round_trippers.go:580]     Audit-Id: 49b6745a-a2cc-481c-af87-d590a566744c
	I0329 17:46:44.324448  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.324455  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.324461  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.324615  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.324852  652427 pod_ready.go:92] pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.324864  652427 pod_ready.go:81] duration metric: took 3.830211ms waiting for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.324872  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.501261  652427 request.go:597] Waited for 176.328506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29kjv
	I0329 17:46:44.501316  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29kjv
	I0329 17:46:44.501321  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.501331  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.503511  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.503534  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.503541  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.503546  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.503551  652427 round_trippers.go:580]     Audit-Id: afc98a7b-2642-49d3-92e9-6c1e188fb8ec
	I0329 17:46:44.503556  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.503561  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.503568  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.503731  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29kjv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ca1dbe90-6525-4660-81a7-68b2c47378da","resourceVersion":"468","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"controller-revision-hash":"8455b5959d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"27cb158d-aed9-4d83-a6c4-788f687069bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27cb158d-aed9-4d83-a6c4-788f687069bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5551 chars]
	I0329 17:46:44.701508  652427 request.go:597] Waited for 197.342922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.701580  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.701586  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.701593  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.703909  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.703931  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.703940  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.703948  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.703955  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.703963  652427 round_trippers.go:580]     Audit-Id: 76f26ed9-3ce7-4640-b6f3-3a68644659a1
	I0329 17:46:44.703970  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.703979  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.704096  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.704517  652427 pod_ready.go:92] pod "kube-proxy-29kjv" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.704533  652427 pod_ready.go:81] duration metric: took 379.655564ms waiting for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.704545  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cww7z" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.901340  652427 request.go:597] Waited for 196.719477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cww7z
	I0329 17:46:44.901398  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cww7z
	I0329 17:46:44.901403  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.901413  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.903787  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.903810  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.903819  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.903827  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.903834  652427 round_trippers.go:580]     Audit-Id: deefd7ca-0c80-4b17-858a-f1e7d465150b
	I0329 17:46:44.903841  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.903848  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.903853  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.903983  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cww7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"3f51eeab-69b9-40eb-87db-67785022f8e2","resourceVersion":"556","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"controller-revision-hash":"8455b5959d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"27cb158d-aed9-4d83-a6c4-788f687069bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27cb158d-aed9-4d83-a6c4-788f687069bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5559 chars]
	I0329 17:46:45.100792  652427 request.go:597] Waited for 196.346103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:45.100862  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:45.100868  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.100876  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.103106  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.103134  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.103143  652427 round_trippers.go:580]     Audit-Id: 27dc1ef8-8b6a-4a6e-a22c-4a12caebe9b8
	I0329 17:46:45.103151  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.103162  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.103168  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.103175  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.103186  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.103290  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"573","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.alpha.kubernetes.io/cri-socket":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os"
:{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernet [truncated 4429 chars]
	I0329 17:46:45.103612  652427 pod_ready.go:92] pod "kube-proxy-cww7z" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:45.103623  652427 pod_ready.go:81] duration metric: took 399.065611ms waiting for pod "kube-proxy-cww7z" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:45.103631  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:45.301039  652427 request.go:597] Waited for 197.342623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:45.301121  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:45.301126  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.301134  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.303446  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.303470  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.303476  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.303481  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.303486  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.303495  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.303499  652427 round_trippers.go:580]     Audit-Id: 1bbaa79f-563f-44d1-adbd-4c5e117b109f
	I0329 17:46:45.303504  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.303611  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220329174520-564087","namespace":"kube-system","uid":"4ba1ded4-06a3-44a7-922f-b02863ff0da0","resourceVersion":"369","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.mirror":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.seen":"2022-03-29T17:45:44.427890891Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:
kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kub [truncated 4784 chars]
	I0329 17:46:45.501025  652427 request.go:597] Waited for 196.934621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:45.501119  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:45.501131  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.501142  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.503598  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.503618  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.503626  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.503633  652427 round_trippers.go:580]     Audit-Id: c2c5126f-c7bd-4da7-b6e4-1bf2dc20c829
	I0329 17:46:45.503641  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.503648  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.503655  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.503666  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.503771  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:45.504184  652427 pod_ready.go:92] pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:45.504203  652427 pod_ready.go:81] duration metric: took 400.563209ms waiting for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:45.504213  652427 pod_ready.go:38] duration metric: took 1.20081773s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:45.504241  652427 system_svc.go:44] waiting for kubelet service to be running ....
	I0329 17:46:45.504297  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:46:45.513916  652427 system_svc.go:56] duration metric: took 9.669505ms WaitForService to wait for kubelet.
	I0329 17:46:45.513941  652427 kubeadm.go:548] duration metric: took 7.228773031s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0329 17:46:45.513961  652427 node_conditions.go:102] verifying NodePressure condition ...
	I0329 17:46:45.701415  652427 request.go:597] Waited for 187.354646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:45.701485  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:45.701491  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.701503  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.703911  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.703935  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.703942  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.703947  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.703956  652427 round_trippers.go:580]     Audit-Id: 50a63ad2-28f7-47f6-9363-0a1a0ecf766f
	I0329 17:46:45.703963  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.703977  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.703985  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.704161  652427 request.go:1181] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"575"},"items":[{"metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","vol
umes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi [truncated 10717 chars]
	I0329 17:46:45.704622  652427 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0329 17:46:45.704639  652427 node_conditions.go:123] node cpu capacity is 8
	I0329 17:46:45.704650  652427 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0329 17:46:45.704654  652427 node_conditions.go:123] node cpu capacity is 8
	I0329 17:46:45.704658  652427 node_conditions.go:105] duration metric: took 190.693449ms to run NodePressure ...
	I0329 17:46:45.704668  652427 start.go:213] waiting for startup goroutines ...
	I0329 17:46:45.738572  652427 start.go:498] kubectl: 1.23.5, cluster: 1.23.5 (minor skew: 0)
	I0329 17:46:45.740746  652427 out.go:176] * Done! kubectl is now configured to use "multinode-20220329174520-564087" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:45:30 UTC, end at Tue 2022-03-29 17:52:52 UTC. --
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[214]: time="2022-03-29T17:45:31.806176191Z" level=info msg="Daemon shutdown complete"
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: docker.service: Succeeded.
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: Stopped Docker Application Container Engine.
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: Starting Docker Application Container Engine...
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.849726597Z" level=info msg="Starting up"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851663663Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851688615Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851714897Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851724738Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852880813Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852911935Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852932126Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852954703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.858635838Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863255611Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863276301Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863281546Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863429327Z" level=info msg="Loading containers: start."
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.941376619Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.973804350Z" level=info msg="Loading containers: done."
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.984414784Z" level=info msg="Docker daemon" commit=906f57f graphdriver(s)=overlay2 version=20.10.13
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.984477950Z" level=info msg="Daemon has completed initialization"
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: Started Docker Application Container Engine.
	Mar 29 17:45:32 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:32.001133060Z" level=info msg="API listen on [::]:2376"
	Mar 29 17:45:32 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:32.004517067Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	30d9cee296bef       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 minutes ago       Running             busybox                   0                   d80189d9f4f6c
	cca2695fb4971       a4ca41631cc7a                                                                                         6 minutes ago       Running             coredns                   0                   0679bc810aadd
	4b576a888064c       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   c87d2b87926b1
	d50e3f59b2ce9       kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c              6 minutes ago       Running             kindnet-cni               0                   2e96d073b624d
	17bbc3cf565ae       3c53fa8541f95                                                                                         6 minutes ago       Running             kube-proxy                0                   fd8b515a73b16
	b7d139996016a       3fc1d62d65872                                                                                         7 minutes ago       Running             kube-apiserver            0                   815a884ca3b74
	aff007f20f144       25f8c7f3da61c                                                                                         7 minutes ago       Running             etcd                      0                   8ab8a1d1f3db1
	c36ea01d8947b       b0c9e5e4dbb14                                                                                         7 minutes ago       Running             kube-controller-manager   0                   79225c76a0a7c
	9180528fcd7d6       884d49d6d8c9f                                                                                         7 minutes ago       Running             kube-scheduler            0                   d037fed5efd16
	
	* 
	* ==> coredns [cca2695fb497] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220329174520-564087
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220329174520-564087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=multinode-20220329174520-564087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T17_45_45_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:45:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220329174520-564087
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:52:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:45:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:45:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:45:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:46:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20220329174520-564087
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                c6a4332a-c343-40f9-a72a-fc1b4f5a5f06
	  Boot ID:                    b9773761-6fd5-4dc5-89e9-c6bdd61e4f8f
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-cbpdd                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 coredns-64897985d-6tcql                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m55s
	  kube-system                 etcd-multinode-20220329174520-564087                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m7s
	  kube-system                 kindnet-7hm65                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m55s
	  kube-system                 kube-apiserver-multinode-20220329174520-564087             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-controller-manager-multinode-20220329174520-564087    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-proxy-29kjv                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-multinode-20220329174520-564087             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m53s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  7m15s (x4 over 7m15s)  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s (x4 over 7m15s)  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s (x3 over 7m15s)  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m8s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m8s                   kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s                   kubelet     Node multinode-20220329174520-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s                   kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m8s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m37s                  kubelet     Node multinode-20220329174520-564087 status is now: NodeReady
	
	
	Name:               multinode-20220329174520-564087-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220329174520-564087-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220329174520-564087-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:52:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20220329174520-564087-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                7a1b4424-c2ff-4b69-97d2-491c41ec39a6
	  Boot ID:                    b9773761-6fd5-4dc5-89e9-c6bdd61e4f8f
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-bgzlj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kindnet-vp76g               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m19s
	  kube-system                 kube-proxy-cww7z            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m16s                  kube-proxy  
	  Normal  Starting                 6m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m19s (x2 over 6m19s)  kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s (x2 over 6m19s)  kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s (x2 over 6m19s)  kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m9s                   kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.292620] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004800] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004213] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000005] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[Mar29 17:52] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004838] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000005] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004786] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.001312] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004797] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004872] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004804] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000005] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004800] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.002933] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	
	* 
	* ==> etcd [aff007f20f14] <==
	* {"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:multinode-20220329174520-564087 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.482Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-29T17:45:39.482Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-03-29T17:46:23.232Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"193.924274ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128011987960469521 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:477 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128011987960469519 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-03-29T17:46:23.232Z","caller":"traceutil/trace.go:171","msg":"trace[1682928115] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:529; }","duration":"183.598769ms","start":"2022-03-29T17:46:23.048Z","end":"2022-03-29T17:46:23.232Z","steps":["trace[1682928115] 'read index received'  (duration: 87.004511ms)","trace[1682928115] 'applied index is now lower than readState.Index'  (duration: 96.593273ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T17:46:23.232Z","caller":"traceutil/trace.go:171","msg":"trace[2023804428] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"359.546149ms","start":"2022-03-29T17:46:22.873Z","end":"2022-03-29T17:46:23.232Z","steps":["trace[2023804428] 'process raft request'  (duration: 165.064256ms)","trace[2023804428] 'compare'  (duration: 193.811363ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:46:23.232Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.732616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T17:46:23.232Z","caller":"traceutil/trace.go:171","msg":"trace[1007111003] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:508; }","duration":"183.765438ms","start":"2022-03-29T17:46:23.048Z","end":"2022-03-29T17:46:23.232Z","steps":["trace[1007111003] 'agreement among raft nodes before linearized reading'  (duration: 183.656295ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:46:23.232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T17:46:22.872Z","time spent":"359.667331ms","remote":"127.0.0.1:34902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:477 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128011987960469519 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	{"level":"info","ts":"2022-03-29T17:46:24.173Z","caller":"traceutil/trace.go:171","msg":"trace[1074712309] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:531; }","duration":"124.451109ms","start":"2022-03-29T17:46:24.049Z","end":"2022-03-29T17:46:24.173Z","steps":["trace[1074712309] 'read index received'  (duration: 124.430089ms)","trace[1074712309] 'applied index is now lower than readState.Index'  (duration: 18.679µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:46:24.275Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"226.645468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T17:46:24.275Z","caller":"traceutil/trace.go:171","msg":"trace[332262182] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:509; }","duration":"226.734293ms","start":"2022-03-29T17:46:24.049Z","end":"2022-03-29T17:46:24.275Z","steps":["trace[332262182] 'agreement among raft nodes before linearized reading'  (duration: 124.556797ms)","trace[332262182] 'range keys from in-memory index tree'  (duration: 102.063021ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  17:52:52 up  2:35,  0 users,  load average: 0.11, 0.43, 0.61
	Linux multinode-20220329174520-564087 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b7d139996016] <==
	* I0329 17:45:41.543920       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0329 17:45:41.543965       1 cache.go:39] Caches are synced for autoregister controller
	I0329 17:45:41.543971       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0329 17:45:41.543988       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0329 17:45:41.545569       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0329 17:45:41.557601       1 controller.go:611] quota admission added evaluator for: namespaces
	I0329 17:45:42.415713       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0329 17:45:42.415754       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0329 17:45:42.420806       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0329 17:45:42.423826       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0329 17:45:42.423842       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0329 17:45:42.772732       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0329 17:45:42.800352       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0329 17:45:42.875483       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0329 17:45:42.880152       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0329 17:45:42.881029       1 controller.go:611] quota admission added evaluator for: endpoints
	I0329 17:45:42.884400       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0329 17:45:43.558513       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0329 17:45:44.248578       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0329 17:45:44.255064       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0329 17:45:44.264704       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0329 17:45:44.456809       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0329 17:45:57.546128       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0329 17:45:57.562452       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0329 17:45:58.972746       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [c36ea01d8947] <==
	* I0329 17:45:57.660898       1 shared_informer.go:247] Caches are synced for attach detach 
	I0329 17:45:57.660936       1 shared_informer.go:247] Caches are synced for disruption 
	I0329 17:45:57.660951       1 disruption.go:371] Sending events to api server.
	I0329 17:45:57.744428       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0329 17:45:57.744552       1 shared_informer.go:247] Caches are synced for PV protection 
	I0329 17:45:57.744750       1 shared_informer.go:247] Caches are synced for expand 
	I0329 17:45:57.744901       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:45:57.745409       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:45:57.757625       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0329 17:45:57.872383       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0329 17:45:57.881350       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-c7txq"
	I0329 17:45:58.158220       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:45:58.158249       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 17:45:58.164401       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:46:17.546673       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0329 17:46:33.586591       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220329174520-564087-m02" does not exist
	I0329 17:46:33.591936       1 range_allocator.go:374] Set node multinode-20220329174520-564087-m02 PodCIDR to [10.244.1.0/24]
	I0329 17:46:33.595738       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cww7z"
	I0329 17:46:33.595769       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vp76g"
	W0329 17:46:37.548383       1 node_lifecycle_controller.go:1012] Missing timestamp for Node multinode-20220329174520-564087-m02. Assuming now as a timestamp.
	I0329 17:46:37.548431       1 event.go:294] "Event occurred" object="multinode-20220329174520-564087-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220329174520-564087-m02 event: Registered Node multinode-20220329174520-564087-m02 in Controller"
	I0329 17:46:46.543461       1 event.go:294] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7978565885 to 2"
	I0329 17:46:46.549046       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-bgzlj"
	I0329 17:46:46.552444       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-cbpdd"
	I0329 17:46:47.558414       1 event.go:294] "Event occurred" object="default/busybox-7978565885-bgzlj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7978565885-bgzlj"
	
	* 
	* ==> kube-proxy [17bbc3cf565a] <==
	* I0329 17:45:58.945092       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:45:58.945172       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:45:58.945209       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:45:58.967823       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:45:58.967857       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:45:58.967867       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:45:58.967893       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:45:58.969670       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:45:58.970267       1 config.go:317] "Starting service config controller"
	I0329 17:45:58.970292       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:45:58.970320       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:45:58.970349       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:45:59.071096       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0329 17:45:59.071109       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [9180528fcd7d] <==
	* E0329 17:45:41.560233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0329 17:45:41.560266       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:45:41.560272       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:41.560295       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:41.560300       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:45:41.559981       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:45:41.560321       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:45:41.559663       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0329 17:45:41.560365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0329 17:45:41.560501       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:45:41.560547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:45:42.371446       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:45:42.371473       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:45:42.381520       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:45:42.381550       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:45:42.427688       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 17:45:42.427714       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 17:45:42.511401       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0329 17:45:42.511427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:45:42.525586       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 17:45:42.525626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 17:45:42.543995       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:42.544034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:44.012532       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0329 17:45:44.553608       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:45:30 UTC, end at Tue 2022-03-29 17:52:52 UTC. --
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.644920    1922 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.653220    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: E0329 17:45:57.658674    1922 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.659426    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846335    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d9c821d-cc40-4073-95ab-b810b61210a7-lib-modules\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846403    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l82l\" (UniqueName: \"kubernetes.io/projected/ca1dbe90-6525-4660-81a7-68b2c47378da-kube-api-access-5l82l\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846436    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca1dbe90-6525-4660-81a7-68b2c47378da-kube-proxy\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846467    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca1dbe90-6525-4660-81a7-68b2c47378da-lib-modules\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846501    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8d9c821d-cc40-4073-95ab-b810b61210a7-cni-cfg\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846524    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d9c821d-cc40-4073-95ab-b810b61210a7-xtables-lock\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846556    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca1dbe90-6525-4660-81a7-68b2c47378da-xtables-lock\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846588    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdfm\" (UniqueName: \"kubernetes.io/projected/8d9c821d-cc40-4073-95ab-b810b61210a7-kube-api-access-8qdfm\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:59 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:59.399736    1922 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
	Mar 29 17:45:59 multinode-20220329174520-564087 kubelet[1922]: E0329 17:45:59.887924    1922 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Mar 29 17:46:04 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:04.400920    1922 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
	Mar 29 17:46:04 multinode-20220329174520-564087 kubelet[1922]: E0329 17:46:04.898395    1922 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.344652    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.344849    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440200    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rbd\" (UniqueName: \"kubernetes.io/projected/7d9d3f42-beb4-4d9d-82ac-3984ac52c132-kube-api-access-n8rbd\") pod \"storage-provisioner\" (UID: \"7d9d3f42-beb4-4d9d-82ac-3984ac52c132\") " pod="kube-system/storage-provisioner"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440262    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d9d3f42-beb4-4d9d-82ac-3984ac52c132-tmp\") pod \"storage-provisioner\" (UID: \"7d9d3f42-beb4-4d9d-82ac-3984ac52c132\") " pod="kube-system/storage-provisioner"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440359    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q74j\" (UniqueName: \"kubernetes.io/projected/a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2-kube-api-access-5q74j\") pod \"coredns-64897985d-6tcql\" (UID: \"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2\") " pod="kube-system/coredns-64897985d-6tcql"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440412    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2-config-volume\") pod \"coredns-64897985d-6tcql\" (UID: \"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2\") " pod="kube-system/coredns-64897985d-6tcql"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.965211    1922 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0679bc810aadd4b766bcaec4315c4bc3a9c4a9401c9acec103467e82125419cc"
	Mar 29 17:46:46 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:46.556823    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:46:46 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:46.717263    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dm4\" (UniqueName: \"kubernetes.io/projected/7d54ecec-d81f-404f-8b4f-566eed570a96-kube-api-access-f4dm4\") pod \"busybox-7978565885-cbpdd\" (UID: \"7d54ecec-d81f-404f-8b4f-566eed570a96\") " pod="default/busybox-7978565885-cbpdd"
	
	* 
	* ==> storage-provisioner [4b576a888064] <==
	* I0329 17:46:15.966882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:46:15.976253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:46:15.976304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0329 17:46:15.988818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0329 17:46:15.989014       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220329174520-564087_9ab0d9c4-8635-4458-a854-00a8c7a090df!
	I0329 17:46:15.989335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e0b69ac-dafe-4f7b-bbd2-dd67c3d402a9", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220329174520-564087_9ab0d9c4-8635-4458-a854-00a8c7a090df became leader
	I0329 17:46:16.089189       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220329174520-564087_9ab0d9c4-8635-4458-a854-00a8c7a090df!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20220329174520-564087 -n multinode-20220329174520-564087
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20220329174520-564087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20220329174520-564087 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20220329174520-564087 describe pod : exit status 1 (38.537384ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context multinode-20220329174520-564087 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (367.04s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (123.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:553: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1m0.248598263s)
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- sh -c "ping -c 1 <nil>"
multinode_test.go:561: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-bgzlj -- sh -c "ping -c 1 <nil>": exit status 2 (186.072586ms)

                                                
                                                
** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

                                                
                                                
** /stderr **
multinode_test.go:562: Failed to ping host (<nil>) from pod (busybox-7978565885-bgzlj): exit status 2
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0329 17:54:18.011333  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:54:30.085210  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
multinode_test.go:553: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1m0.229649515s)
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- sh -c "ping -c 1 <nil>"
multinode_test.go:561: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220329174520-564087 -- exec busybox-7978565885-cbpdd -- sh -c "ping -c 1 <nil>": exit status 2 (182.604396ms)

                                                
                                                
** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

                                                
                                                
** /stderr **
multinode_test.go:562: Failed to ping host (<nil>) from pod (busybox-7978565885-cbpdd): exit status 2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20220329174520-564087
helpers_test.go:236: (dbg) docker inspect multinode-20220329174520-564087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e",
	        "Created": "2022-03-29T17:45:29.644975292Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 653073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:45:29.996104263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/hosts",
	        "LogPath": "/var/lib/docker/containers/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e-json.log",
	        "Name": "/multinode-20220329174520-564087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220329174520-564087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220329174520-564087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4-init/diff:/var/lib/docker/overlay2/9db4e23be625e034f4ded606113a10eac42e47ab03824d2ab674189ac3bfe07b/diff:/var/lib/docker/overlay2/23cb119bfb0f25fd9defc73c170f1edc0bcfc13d6d5cd5613108d72d2020b31c/diff:/var/lib/docker/overlay2/bc76d55655624ec99d26daa97a683f1a970449af5a278430e255d62e3f8b7357/diff:/var/lib/docker/overlay2/ec38188e1f99f15e49cbf2bb0c04cafd5ff241ea7966de30f2b4201c74cb77cb/diff:/var/lib/docker/overlay2/a5d5403dacc48240e9b97d1b8e55974405d1cf196bfcfa0ca32548f269cc1071/diff:/var/lib/docker/overlay2/9b4ccea6c0eb5887c76137ed35db5e0e51cf583e7c5034dcee8dd746f9a5c3bb/diff:/var/lib/docker/overlay2/8938344848e3a72fe363a3ed45041a50457e8ce2a391113dd515f7afd6d909db/diff:/var/lib/docker/overlay2/b6696995e5a26e0378be0861a49fb24498de5c915b3c02bd34ae778e05b48a9d/diff:/var/lib/docker/overlay2/f95310f65d1c113884a9ac4dc0f127daf9d1b3f623762106478e4fe41692cc2d/diff:/var/lib/docker/overlay2/30ef7d70756fc9f43cfd45ede0c78a5dbd376911f1844027d7dd8448f0d1bd2c/diff:/var/lib/docker/overlay2/aeeca576548699f29ecc5f8389942ed3bfde02e1b481e0e8365142a90064496c/diff:/var/lib/docker/overlay2/5ba2587df64129d8cf8c96c14448186757d9b360c9e3101c4a20b1edd728ce18/diff:/var/lib/docker/overlay2/64d1213878e17d1927644c40bb0d52e6a3a124b5e86daa58f166ee0704d9da9b/diff:/var/lib/docker/overlay2/7ac9b531b4439100cfb4789e5009915d72b467705e391e0d197a760783cb4e4b/diff:/var/lib/docker/overlay2/f6f1442868cd491bc73dc995e7c0b552c0d2843d43327267ee3d015edc11da4e/diff:/var/lib/docker/overlay2/c7c6c9113fac60b95369a3e535649a67c14c4c74da4c7de68bd1aaf14bce0ac3/diff:/var/lib/docker/overlay2/9eba2b84f547941ca647ea1c9eff5275fae385f1b800741ed421672c6437487a/diff:/var/lib/docker/overlay2/8bb3fb7770413b61ccdf84f4a5cccb728206fcecd1f006ca906874d3c5d4481c/diff:/var/lib/docker/overlay2/7ebf161ae3775c9e0f6ebe9e26d40e46766d5f3387c2ea279679d585cbd19866/diff:/var/lib/docker/overlay2/4d1064116e64fbf54de0c8ef70255b6fc77b005725e02a52281bfa0e5de5a7af/diff:/var/lib/docker/overlay2/f82ba82619b078a905b7e5a1466fc8ca89d8664fa04dc61cf5914aa0c34ae177/diff:/var/lib/docker/overlay2/728d17980e4c7c100416d2fd1be83673103f271144543fb61798e4a0303c1d63/diff:/var/lib/docker/overlay2/d7e175c39be427bc2372876df06eb27ba2b10462c347d1ee8e43a957642f2ca5/diff:/var/lib/docker/overlay2/1e872f98bd0c0432c85e2812af12d33dcacc384f762347889c846540583137be/diff:/var/lib/docker/overlay2/f5da27e443a249317e2670de2816cbae827a62edb0e4475ac004418a25e279d8/diff:/var/lib/docker/overlay2/33e17a308b62964f37647c1f62c13733476a7eaadb28f29ad1d1f21b5d0456ee/diff:/var/lib/docker/overlay2/6b6bb10e19be67a77e94bd177e583241953840e08b30d68eca16b63e2c5fd574/diff:/var/lib/docker/overlay2/8e061338d4e4cf068f61861fc08144097ee117189101f3a71f361481dc288fd3/diff:/var/lib/docker/overlay2/27d99a6f864614a9dad7efdece7ace23256ff5489d66daed625285168e2fcc48/diff:/var/lib/docker/overlay2/8642d51376c5c35316cb2d9d5832c7382cb5e0d9df1b766f5187ab10eaafb4d6/diff:/var/lib/docker/overlay2/9ffbd3f47292209200a9ab357ba5f68beb15c82f2511804d74dcf2ad3b44155f/diff:/var/lib/docker/overlay2/d2512b29dd494ed5dc05b52800efe6a97b07803c1d3172d6a9d9b0b45a7e19eb/diff:/var/lib/docker/overlay2/7e87858609885bf7a576966de8888d2db30e18d8b582b6f6434176c59d71cca5/diff:/var/lib/docker/overlay2/54e00a6514941a66517f8aa879166fd5e8660f7ab673e554aa927bfcb19a145d/diff:/var/lib/docker/overlay2/02ced31172683ffa2fe2365aa827ef66d364bd100865b9095680e2c79f2e868e/diff:/var/lib/docker/overlay2/e65eba629c5d8828d9a2c4b08b322edb4b07793e8bfb091b93fd15013209a387/diff:/var/lib/docker/overlay2/3ee0fd224e7a66a3d8cc598c64cdaf0436eab7f466aa34e3406a0058e16a7f30/diff:/var/lib/docker/overlay2/29b13dceeebd7568b56f69e176c7d37f5b88fe4c13065f01a6f3a36606d5b62c/diff:/var/lib/docker/overlay2/b10262d215789890fd0056a6e4ff379df5e663524b5b96d9671e10c54adc5a25/diff:/var/lib/docker/overlay2/a292b90c390a4decbdd1887aa58471b2827752df1ef18358a1fb82fd665de0b4/diff:/var/lib/docker/overlay2/fbac86c28573a8fd7399f9fd0a51ebb8eef8158b8264c242aa16e16f6227522f/diff:/var/lib/docker/overlay2/b0ddb339636d56ff9132bc75064a21216c2e71f3b3b53d4a39f9fe66133219c2/diff:/var/lib/docker/overlay2/9e52af85e3d331425d5757a9bde2ace3e5e12622a0d748e6559c2a74907adaa1/diff:/var/lib/docker/overlay2/e856b1e5a3fe78b31306313bdf9bc42d7b1f45dc864587f3ce5dfd3793cb96d3/diff:/var/lib/docker/overlay2/1fbed3ccb397ff1873888dc253845b880a4d30dda3b181220402f7592d8a3ad7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dfb6ed0b3c8e71f66522435d2ddc7d7b6bdf61d0602659b152e2c6cf659808c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20220329174520-564087",
	                "Source": "/var/lib/docker/volumes/multinode-20220329174520-564087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220329174520-564087",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220329174520-564087",
	                "name.minikube.sigs.k8s.io": "multinode-20220329174520-564087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfa3887bc3466cbc8d8b255e9ed809e9bd584fbd5da7465eb8488801cc438e51",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49514"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49512"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49511"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfa3887bc346",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220329174520-564087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09d1f85080aa",
	                        "multinode-20220329174520-564087"
	                    ],
	                    "NetworkID": "aff26c540dc64674861fa27e2ecf8bdb09cef8a75e776a2ae6774799c98e2445",
	                    "EndpointID": "1d6a16f20312cc311d8f81df64f834ccf84315a0d66f08036bd5e970d53767f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20220329174520-564087 -n multinode-20220329174520-564087
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220329174520-564087 logs -n 25: (1.115989322s)
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | docker-network-20220329174330-564087   | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:56 UTC | Tue, 29 Mar 2022 17:43:58 UTC |
	|         | docker-network-20220329174330-564087              |                                        |         |         |                               |                               |
	| start   | -p                                                | existing-network-20220329174358-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:43:59 UTC | Tue, 29 Mar 2022 17:44:24 UTC |
	|         | existing-network-20220329174358-564087            |                                        |         |         |                               |                               |
	|         | --network=existing-network                        |                                        |         |         |                               |                               |
	| delete  | -p                                                | existing-network-20220329174358-564087 | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:24 UTC | Tue, 29 Mar 2022 17:44:27 UTC |
	|         | existing-network-20220329174358-564087            |                                        |         |         |                               |                               |
	| start   | -p                                                | custom-subnet-20220329174427-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:27 UTC | Tue, 29 Mar 2022 17:44:53 UTC |
	|         | custom-subnet-20220329174427-564087               |                                        |         |         |                               |                               |
	|         | --subnet=192.168.60.0/24                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | custom-subnet-20220329174427-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:53 UTC | Tue, 29 Mar 2022 17:44:55 UTC |
	|         | custom-subnet-20220329174427-564087               |                                        |         |         |                               |                               |
	| start   | -p                                                | mount-start-1-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:44:55 UTC | Tue, 29 Mar 2022 17:45:00 UTC |
	|         | mount-start-1-20220329174455-564087               |                                        |         |         |                               |                               |
	|         | --memory=2048 --mount                             |                                        |         |         |                               |                               |
	|         | --mount-gid 0 --mount-msize 6543                  |                                        |         |         |                               |                               |
	|         | --mount-port 46464 --mount-uid 0                  |                                        |         |         |                               |                               |
	|         | --no-kubernetes --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                        |         |         |                               |                               |
	| -p      | mount-start-1-20220329174455-564087               | mount-start-1-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:01 UTC | Tue, 29 Mar 2022 17:45:01 UTC |
	|         | ssh -- ls /minikube-host                          |                                        |         |         |                               |                               |
	| start   | -p                                                | mount-start-2-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:01 UTC | Tue, 29 Mar 2022 17:45:06 UTC |
	|         | mount-start-2-20220329174455-564087               |                                        |         |         |                               |                               |
	|         | --memory=2048 --mount                             |                                        |         |         |                               |                               |
	|         | --mount-gid 0 --mount-msize 6543                  |                                        |         |         |                               |                               |
	|         | --mount-port 46465 --mount-uid 0                  |                                        |         |         |                               |                               |
	|         | --no-kubernetes --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                        |         |         |                               |                               |
	| -p      | mount-start-2-20220329174455-564087               | mount-start-2-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:07 UTC | Tue, 29 Mar 2022 17:45:07 UTC |
	|         | ssh -- ls /minikube-host                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | mount-start-1-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:07 UTC | Tue, 29 Mar 2022 17:45:09 UTC |
	|         | mount-start-1-20220329174455-564087               |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=5                            |                                        |         |         |                               |                               |
	| -p      | mount-start-2-20220329174455-564087               | mount-start-2-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:09 UTC | Tue, 29 Mar 2022 17:45:09 UTC |
	|         | ssh -- ls /minikube-host                          |                                        |         |         |                               |                               |
	| stop    | -p                                                | mount-start-2-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:09 UTC | Tue, 29 Mar 2022 17:45:11 UTC |
	|         | mount-start-2-20220329174455-564087               |                                        |         |         |                               |                               |
	| start   | -p                                                | mount-start-2-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:11 UTC | Tue, 29 Mar 2022 17:45:17 UTC |
	|         | mount-start-2-20220329174455-564087               |                                        |         |         |                               |                               |
	| -p      | mount-start-2-20220329174455-564087               | mount-start-2-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:18 UTC | Tue, 29 Mar 2022 17:45:18 UTC |
	|         | ssh -- ls /minikube-host                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | mount-start-2-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:18 UTC | Tue, 29 Mar 2022 17:45:20 UTC |
	|         | mount-start-2-20220329174455-564087               |                                        |         |         |                               |                               |
	| delete  | -p                                                | mount-start-1-20220329174455-564087    | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:20 UTC | Tue, 29 Mar 2022 17:45:20 UTC |
	|         | mount-start-1-20220329174455-564087               |                                        |         |         |                               |                               |
	| start   | -p                                                | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:45:20 UTC | Tue, 29 Mar 2022 17:46:45 UTC |
	|         | multinode-20220329174520-564087                   |                                        |         |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                        |         |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                        |         |         |                               |                               |
	| kubectl | -p multinode-20220329174520-564087 -- apply -f    | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:46 UTC | Tue, 29 Mar 2022 17:46:46 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                        |         |         |                               |                               |
	| kubectl | -p                                                | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:46 UTC | Tue, 29 Mar 2022 17:46:49 UTC |
	|         | multinode-20220329174520-564087                   |                                        |         |         |                               |                               |
	|         | -- rollout status                                 |                                        |         |         |                               |                               |
	|         | deployment/busybox                                |                                        |         |         |                               |                               |
	| kubectl | -p multinode-20220329174520-564087                | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:49 UTC | Tue, 29 Mar 2022 17:46:49 UTC |
	|         | -- get pods -o                                    |                                        |         |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                        |         |         |                               |                               |
	| kubectl | -p multinode-20220329174520-564087                | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:46:49 UTC | Tue, 29 Mar 2022 17:46:49 UTC |
	|         | -- get pods -o                                    |                                        |         |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                        |         |         |                               |                               |
	| -p      | multinode-20220329174520-564087                   | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:52:51 UTC | Tue, 29 Mar 2022 17:52:52 UTC |
	|         | logs -n 25                                        |                                        |         |         |                               |                               |
	| kubectl | -p multinode-20220329174520-564087                | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:52:53 UTC | Tue, 29 Mar 2022 17:52:53 UTC |
	|         | -- get pods -o                                    |                                        |         |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                        |         |         |                               |                               |
	| kubectl | -p                                                | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:52:53 UTC | Tue, 29 Mar 2022 17:53:53 UTC |
	|         | multinode-20220329174520-564087                   |                                        |         |         |                               |                               |
	|         | -- exec                                           |                                        |         |         |                               |                               |
	|         | busybox-7978565885-bgzlj                          |                                        |         |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                        |         |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                        |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                        |         |         |                               |                               |
	| kubectl | -p                                                | multinode-20220329174520-564087        | jenkins | v1.25.2 | Tue, 29 Mar 2022 17:53:53 UTC | Tue, 29 Mar 2022 17:54:54 UTC |
	|         | multinode-20220329174520-564087                   |                                        |         |         |                               |                               |
	|         | -- exec                                           |                                        |         |         |                               |                               |
	|         | busybox-7978565885-cbpdd                          |                                        |         |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                        |         |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                        |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                        |         |         |                               |                               |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:45:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:45:20.331936  652427 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:45:20.332074  652427 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:45:20.332084  652427 out.go:310] Setting ErrFile to fd 2...
	I0329 17:45:20.332089  652427 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:45:20.332226  652427 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:45:20.332561  652427 out.go:304] Setting JSON to false
	I0329 17:45:20.333822  652427 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8874,"bootTime":1648567047,"procs":516,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:45:20.333890  652427 start.go:124] virtualization: kvm guest
	I0329 17:45:20.336395  652427 out.go:176] * [multinode-20220329174520-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 17:45:20.337789  652427 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:45:20.336537  652427 notify.go:193] Checking for updates...
	I0329 17:45:20.339121  652427 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:45:20.340388  652427 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:20.341724  652427 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 17:45:20.342960  652427 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 17:45:20.343220  652427 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:45:20.381804  652427 docker.go:137] docker version: linux-20.10.14
	I0329 17:45:20.381904  652427 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:45:20.469400  652427 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:45:20.4097232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:45:20.469499  652427 docker.go:254] overlay module found
	I0329 17:45:20.471464  652427 out.go:176] * Using the docker driver based on user configuration
	I0329 17:45:20.471504  652427 start.go:283] selected driver: docker
	I0329 17:45:20.471513  652427 start.go:800] validating driver "docker" against <nil>
	I0329 17:45:20.471540  652427 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0329 17:45:20.471590  652427 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0329 17:45:20.471616  652427 out.go:241] ! Your cgroup does not allow setting memory.
	I0329 17:45:20.472872  652427 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0329 17:45:20.473545  652427 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:45:20.558853  652427 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:45:20.501447069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:45:20.558990  652427 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 17:45:20.559158  652427 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 17:45:20.559179  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:45:20.559184  652427 cni.go:154] 0 nodes found, recommending kindnet
	I0329 17:45:20.559195  652427 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 17:45:20.559210  652427 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 17:45:20.559220  652427 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0329 17:45:20.559234  652427 start_flags.go:306] config:
	{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:45:20.561307  652427 out.go:176] * Starting control plane node multinode-20220329174520-564087 in cluster multinode-20220329174520-564087
	I0329 17:45:20.561350  652427 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:45:20.562512  652427 out.go:176] * Pulling base image ...
	I0329 17:45:20.562536  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:45:20.562572  652427 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:45:20.562568  652427 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 17:45:20.562684  652427 cache.go:57] Caching tarball of preloaded images
	I0329 17:45:20.562956  652427 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 17:45:20.562980  652427 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 17:45:20.563381  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:45:20.563424  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json: {Name:mk2811d5590202c4e7e5921a7acc1152f1603641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:20.604209  652427 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 17:45:20.604239  652427 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 17:45:20.604251  652427 cache.go:208] Successfully downloaded all kic artifacts
	I0329 17:45:20.604324  652427 start.go:348] acquiring machines lock for multinode-20220329174520-564087: {Name:mk4375468e93ff31e49b583a42e4274bca560bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 17:45:20.604454  652427 start.go:352] acquired machines lock for "multinode-20220329174520-564087" in 108.527µs
	I0329 17:45:20.604484  652427 start.go:90] Provisioning new machine with config: &{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 17:45:20.604559  652427 start.go:127] createHost starting for "" (driver="docker")
	I0329 17:45:20.606589  652427 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0329 17:45:20.606818  652427 start.go:161] libmachine.API.Create for "multinode-20220329174520-564087" (driver="docker")
	I0329 17:45:20.606851  652427 client.go:168] LocalClient.Create starting
	I0329 17:45:20.606933  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 17:45:20.606962  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:45:20.606982  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:45:20.607038  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 17:45:20.607054  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:45:20.607063  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:45:20.607383  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 17:45:20.639077  652427 cli_runner.go:180] docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 17:45:20.639148  652427 network_create.go:262] running [docker network inspect multinode-20220329174520-564087] to gather additional debugging logs...
	I0329 17:45:20.639168  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087
	W0329 17:45:20.668363  652427 cli_runner.go:180] docker network inspect multinode-20220329174520-564087 returned with exit code 1
	I0329 17:45:20.668421  652427 network_create.go:265] error running [docker network inspect multinode-20220329174520-564087]: docker network inspect multinode-20220329174520-564087: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220329174520-564087
	I0329 17:45:20.668446  652427 network_create.go:267] output of [docker network inspect multinode-20220329174520-564087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220329174520-564087
	
	** /stderr **
	I0329 17:45:20.668496  652427 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:45:20.698410  652427 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003066b8] misses:0}
	I0329 17:45:20.698471  652427 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 17:45:20.698488  652427 network_create.go:114] attempt to create docker network multinode-20220329174520-564087 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 17:45:20.698528  652427 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220329174520-564087
	I0329 17:45:20.760118  652427 network_create.go:98] docker network multinode-20220329174520-564087 192.168.49.0/24 created
	I0329 17:45:20.760156  652427 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20220329174520-564087" container
	I0329 17:45:20.760221  652427 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 17:45:20.790463  652427 cli_runner.go:133] Run: docker volume create multinode-20220329174520-564087 --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087 --label created_by.minikube.sigs.k8s.io=true
	I0329 17:45:20.821303  652427 oci.go:102] Successfully created a docker volume multinode-20220329174520-564087
	I0329 17:45:20.821379  652427 cli_runner.go:133] Run: docker run --rm --name multinode-20220329174520-564087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087 --entrypoint /usr/bin/test -v multinode-20220329174520-564087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 17:45:21.366294  652427 oci.go:106] Successfully prepared a docker volume multinode-20220329174520-564087
	I0329 17:45:21.366346  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:45:21.366368  652427 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 17:45:21.366440  652427 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 17:45:29.526467  652427 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.159935577s)
	I0329 17:45:29.526505  652427 kic.go:188] duration metric: took 8.160134 seconds to extract preloaded images to volume
	W0329 17:45:29.526547  652427 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0329 17:45:29.526557  652427 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0329 17:45:29.526615  652427 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 17:45:29.614686  652427 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220329174520-564087 --name multinode-20220329174520-564087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220329174520-564087 --network multinode-20220329174520-564087 --ip 192.168.49.2 --volume multinode-20220329174520-564087:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 17:45:30.004820  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Running}}
	I0329 17:45:30.038532  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:30.070711  652427 cli_runner.go:133] Run: docker exec multinode-20220329174520-564087 stat /var/lib/dpkg/alternatives/iptables
	I0329 17:45:30.131557  652427 oci.go:278] the created container "multinode-20220329174520-564087" has a running status.
	I0329 17:45:30.131603  652427 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa...
	I0329 17:45:30.200220  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0329 17:45:30.200276  652427 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 17:45:30.286384  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:30.324206  652427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 17:45:30.324234  652427 kic_runner.go:114] Args: [docker exec --privileged multinode-20220329174520-564087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 17:45:30.413834  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:30.447846  652427 machine.go:88] provisioning docker machine ...
	I0329 17:45:30.447885  652427 ubuntu.go:169] provisioning hostname "multinode-20220329174520-564087"
	I0329 17:45:30.447937  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:30.480443  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:30.480716  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:30.480748  652427 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20220329174520-564087 && echo "multinode-20220329174520-564087" | sudo tee /etc/hostname
	I0329 17:45:30.605009  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20220329174520-564087
	
	I0329 17:45:30.605124  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:30.636838  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:30.636980  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:30.637014  652427 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220329174520-564087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220329174520-564087/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220329174520-564087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 17:45:30.752777  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 17:45:30.752810  652427 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 17:45:30.752840  652427 ubuntu.go:177] setting up certificates
	I0329 17:45:30.752852  652427 provision.go:83] configureAuth start
	I0329 17:45:30.752909  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087
	I0329 17:45:30.783678  652427 provision.go:138] copyHostCerts
	I0329 17:45:30.783733  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:45:30.783769  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 17:45:30.783787  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:45:30.783857  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 17:45:30.783947  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:45:30.783981  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 17:45:30.783992  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:45:30.784030  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 17:45:30.784102  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:45:30.784128  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 17:45:30.784138  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:45:30.784171  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 17:45:30.784232  652427 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.multinode-20220329174520-564087 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220329174520-564087]
	I0329 17:45:30.865210  652427 provision.go:172] copyRemoteCerts
	I0329 17:45:30.865293  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 17:45:30.865340  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:30.896127  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:30.980224  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0329 17:45:30.980325  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0329 17:45:30.997145  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0329 17:45:30.997224  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 17:45:31.014432  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0329 17:45:31.014495  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 17:45:31.031419  652427 provision.go:86] duration metric: configureAuth took 278.554389ms
	I0329 17:45:31.031445  652427 ubuntu.go:193] setting minikube options for container-runtime
	I0329 17:45:31.031603  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:45:31.031650  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:31.062634  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:31.062805  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:31.062825  652427 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 17:45:31.185413  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 17:45:31.185441  652427 ubuntu.go:71] root file system type: overlay
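The two log lines above show the provisioner probing the guest's root filesystem type over SSH. A minimal standalone sketch of the same probe (runs locally rather than over SSH; the `overlay` result in the log is specific to the kicbase container, a bare host will typically report ext4, xfs, or similar):

```shell
# Probe the root filesystem type the same way the provisioner does.
# GNU coreutils df supports --output=fstype; tail skips the header row.
fstype=$(df --output=fstype / | tail -n 1)
echo "root file system type: ${fstype}"
```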
	I0329 17:45:31.185604  652427 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 17:45:31.185665  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:31.217219  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:31.217380  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:31.217441  652427 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 17:45:31.341318  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 17:45:31.341404  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:31.372795  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:45:31.372939  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49514 <nil> <nil>}
	I0329 17:45:31.372957  652427 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 17:45:31.996301  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 17:45:31.334016942 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 17:45:31.996333  652427 machine.go:91] provisioned docker machine in 1.548462042s
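The `diff ... || { mv ...; systemctl ... }` command above is an update-if-changed pattern: the candidate unit is written to `<path>.new`, and only when it differs from the installed file (diff exits nonzero, which also covers a missing file) is it moved into place and the service reloaded. A generic, hedged sketch of that pattern with illustrative paths and the privileged systemctl step left as a comment:

```shell
# Install "$1.new" over "$1" only if the contents differ.
# Mirrors the log's `diff -u old new || { mv new old; reload; }` idiom:
# diff exits 0 when identical, nonzero when different or when "$1"
# does not exist yet.
update_unit() {
  local path="$1" new="$1.new"
  if diff -u "$path" "$new"; then
    rm -f "$new"        # identical: discard the candidate
  else
    mv "$new" "$path"   # changed (or missing): install the candidate
    # sudo systemctl daemon-reload && sudo systemctl restart <service>
  fi
}
```

Because the replacement only happens on a real change, re-running provisioning leaves an already-correct unit (and the running daemon) untouched.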
	I0329 17:45:31.996344  652427 client.go:171] LocalClient.Create took 11.389481842s
	I0329 17:45:31.996356  652427 start.go:169] duration metric: libmachine.API.Create for "multinode-20220329174520-564087" took 11.389538465s
	I0329 17:45:31.996367  652427 start.go:302] post-start starting for "multinode-20220329174520-564087" (driver="docker")
	I0329 17:45:31.996373  652427 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 17:45:31.996438  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 17:45:31.996488  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.029326  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.116473  652427 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 17:45:32.119015  652427 command_runner.go:130] > NAME="Ubuntu"
	I0329 17:45:32.119035  652427 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0329 17:45:32.119039  652427 command_runner.go:130] > ID=ubuntu
	I0329 17:45:32.119044  652427 command_runner.go:130] > ID_LIKE=debian
	I0329 17:45:32.119048  652427 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0329 17:45:32.119054  652427 command_runner.go:130] > VERSION_ID="20.04"
	I0329 17:45:32.119062  652427 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0329 17:45:32.119069  652427 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0329 17:45:32.119076  652427 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0329 17:45:32.119090  652427 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0329 17:45:32.119101  652427 command_runner.go:130] > VERSION_CODENAME=focal
	I0329 17:45:32.119106  652427 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0329 17:45:32.119200  652427 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 17:45:32.119221  652427 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 17:45:32.119229  652427 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 17:45:32.119235  652427 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 17:45:32.119247  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 17:45:32.119303  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 17:45:32.119365  652427 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> 5640872.pem in /etc/ssl/certs
	I0329 17:45:32.119377  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /etc/ssl/certs/5640872.pem
	I0329 17:45:32.119457  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 17:45:32.125790  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:45:32.142458  652427 start.go:305] post-start completed in 146.075505ms
	I0329 17:45:32.142815  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087
	I0329 17:45:32.173629  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:45:32.173868  652427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 17:45:32.173906  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.204947  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.289421  652427 command_runner.go:130] > 17%!
	(MISSING)I0329 17:45:32.289487  652427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 17:45:32.292925  652427 command_runner.go:130] > 242G
	I0329 17:45:32.293120  652427 start.go:130] duration metric: createHost completed in 11.688551212s
	I0329 17:45:32.293142  652427 start.go:81] releasing machines lock for "multinode-20220329174520-564087", held for 11.688671722s
	I0329 17:45:32.293224  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087
	I0329 17:45:32.323412  652427 ssh_runner.go:195] Run: systemctl --version
	I0329 17:45:32.323463  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.323519  652427 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 17:45:32.323576  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:32.354831  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.355356  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:32.579669  652427 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.15)
	I0329 17:45:32.579705  652427 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0329 17:45:32.579775  652427 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0329 17:45:32.579792  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 17:45:32.579798  652427 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0329 17:45:32.579807  652427 command_runner.go:130] > <H1>302 Moved</H1>
	I0329 17:45:32.579814  652427 command_runner.go:130] > The document has moved
	I0329 17:45:32.579825  652427 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0329 17:45:32.579832  652427 command_runner.go:130] > </BODY></HTML>
	I0329 17:45:32.588958  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:45:32.596881  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:45:32.596934  652427 command_runner.go:130] > [Unit]
	I0329 17:45:32.596945  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:45:32.596954  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:45:32.596961  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:45:32.596978  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:45:32.596994  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:45:32.597011  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:45:32.597014  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:45:32.597018  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:45:32.597024  652427 command_runner.go:130] > [Service]
	I0329 17:45:32.597028  652427 command_runner.go:130] > Type=notify
	I0329 17:45:32.597033  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:45:32.597043  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:45:32.597069  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:45:32.597085  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:45:32.597098  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:45:32.597111  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:45:32.597123  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:45:32.597138  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:45:32.597153  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:45:32.597166  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:45:32.597175  652427 command_runner.go:130] > ExecStart=
	I0329 17:45:32.597198  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:45:32.597209  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:45:32.597215  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:45:32.597227  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:45:32.597235  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:45:32.597238  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:45:32.597245  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:45:32.597250  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:45:32.597254  652427 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0329 17:45:32.597261  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:45:32.597264  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:45:32.597275  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:45:32.597283  652427 command_runner.go:130] > Delegate=yes
	I0329 17:45:32.597288  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:45:32.597295  652427 command_runner.go:130] > KillMode=process
	I0329 17:45:32.597298  652427 command_runner.go:130] > [Install]
	I0329 17:45:32.597306  652427 command_runner.go:130] > WantedBy=multi-user.target
	I0329 17:45:32.597716  652427 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 17:45:32.597774  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 17:45:32.606512  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 17:45:32.618200  652427 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0329 17:45:32.618223  652427 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
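The command above writes a two-key `/etc/crictl.yaml` pointing crictl at the dockershim socket. A sketch that writes the same config to a scratch path (so no sudo is needed; the real target is `/etc/crictl.yaml` as shown in the log):

```shell
# Write the same crictl config the provisioner installs, to a temp dir.
dest="$(mktemp -d)/crictl.yaml"
cat > "$dest" <<'EOF'
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
EOF
cat "$dest"
```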
	I0329 17:45:32.618272  652427 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 17:45:32.692673  652427 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 17:45:32.768369  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:45:32.776590  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:45:32.776718  652427 command_runner.go:130] > [Unit]
	I0329 17:45:32.776734  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:45:32.776743  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:45:32.776757  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:45:32.776770  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:45:32.776781  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:45:32.776790  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:45:32.776801  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:45:32.776811  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:45:32.776822  652427 command_runner.go:130] > [Service]
	I0329 17:45:32.776832  652427 command_runner.go:130] > Type=notify
	I0329 17:45:32.776838  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:45:32.776851  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:45:32.776866  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:45:32.776881  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:45:32.776895  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:45:32.776910  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:45:32.776924  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:45:32.776939  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:45:32.776954  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:45:32.776968  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:45:32.776983  652427 command_runner.go:130] > ExecStart=
	I0329 17:45:32.777006  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:45:32.777019  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:45:32.777034  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:45:32.777048  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:45:32.777068  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:45:32.777077  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:45:32.777088  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:45:32.777098  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:45:32.777110  652427 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0329 17:45:32.777128  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:45:32.777138  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:45:32.777149  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:45:32.777159  652427 command_runner.go:130] > Delegate=yes
	I0329 17:45:32.777172  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:45:32.777182  652427 command_runner.go:130] > KillMode=process
	I0329 17:45:32.777189  652427 command_runner.go:130] > [Install]
	I0329 17:45:32.777204  652427 command_runner.go:130] > WantedBy=multi-user.target
	I0329 17:45:32.777443  652427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 17:45:32.856398  652427 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 17:45:32.865426  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:45:32.901376  652427 command_runner.go:130] > 20.10.13
	I0329 17:45:32.903182  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:45:32.940289  652427 command_runner.go:130] > 20.10.13
	I0329 17:45:32.943790  652427 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 17:45:32.943866  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:45:32.973629  652427 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0329 17:45:32.976814  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
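The one-liner above is an idempotent hosts-file update: strip any existing `host.minikube.internal` line, append the current one, and copy the result back. A standalone sketch of the same trick run against a scratch copy instead of the real `/etc/hosts` (bash syntax, since the `$'\t…'` tab pattern matches what the log uses):

```shell
# Idempotently (re)write the host.minikube.internal entry in a scratch
# copy of a hosts file. The IP and hostname match the log; the path is
# a temp file so this is safe to run anywhere.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it repeatedly always leaves exactly one `host.minikube.internal` entry, which is why minikube can reuse it on every start.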
	I0329 17:45:32.987651  652427 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0329 17:45:32.987730  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:45:32.987792  652427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 17:45:33.017331  652427 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.5
	I0329 17:45:33.017355  652427 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.5
	I0329 17:45:33.017360  652427 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.5
	I0329 17:45:33.017366  652427 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.5
	I0329 17:45:33.017370  652427 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0329 17:45:33.017374  652427 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0329 17:45:33.017378  652427 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0329 17:45:33.017382  652427 command_runner.go:130] > kubernetesui/dashboard:v2.3.1
	I0329 17:45:33.017387  652427 command_runner.go:130] > kubernetesui/metrics-scraper:v1.0.7
	I0329 17:45:33.017391  652427 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 17:45:33.019113  652427 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 17:45:33.019130  652427 docker.go:537] Images already preloaded, skipping extraction
	I0329 17:45:33.019178  652427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 17:45:33.048467  652427 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.5
	I0329 17:45:33.048496  652427 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.5
	I0329 17:45:33.048505  652427 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.5
	I0329 17:45:33.048512  652427 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.5
	I0329 17:45:33.048516  652427 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0329 17:45:33.048520  652427 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0329 17:45:33.048524  652427 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0329 17:45:33.048530  652427 command_runner.go:130] > kubernetesui/dashboard:v2.3.1
	I0329 17:45:33.048537  652427 command_runner.go:130] > kubernetesui/metrics-scraper:v1.0.7
	I0329 17:45:33.048544  652427 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 17:45:33.050283  652427 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 17:45:33.050327  652427 cache_images.go:84] Images are preloaded, skipping loading
	I0329 17:45:33.050376  652427 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 17:45:33.129448  652427 command_runner.go:130] > cgroupfs
	I0329 17:45:33.131302  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:45:33.131320  652427 cni.go:154] 1 nodes found, recommending kindnet
	I0329 17:45:33.131338  652427 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 17:45:33.131358  652427 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220329174520-564087 NodeName:multinode-20220329174520-564087 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 17:45:33.131518  652427 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220329174520-564087"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 17:45:33.131621  652427 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220329174520-564087 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0329 17:45:33.131687  652427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 17:45:33.138123  652427 command_runner.go:130] > kubeadm
	I0329 17:45:33.138147  652427 command_runner.go:130] > kubectl
	I0329 17:45:33.138153  652427 command_runner.go:130] > kubelet
	I0329 17:45:33.138737  652427 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 17:45:33.138796  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 17:45:33.145401  652427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (409 bytes)
	I0329 17:45:33.157419  652427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 17:45:33.169290  652427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0329 17:45:33.181295  652427 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 17:45:33.184089  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:45:33.193022  652427 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087 for IP: 192.168.49.2
	I0329 17:45:33.193182  652427 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 17:45:33.193220  652427 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 17:45:33.193271  652427 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key
	I0329 17:45:33.193286  652427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt with IP's: []
	I0329 17:45:33.382270  652427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt ...
	I0329 17:45:33.382311  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt: {Name:mkf2670e92ffcd5bb222a702ee708a8cd949c85e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.382527  652427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key ...
	I0329 17:45:33.382541  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key: {Name:mk2469da8bb447b021f089c5151287d57e91b757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.382625  652427 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2
	I0329 17:45:33.382641  652427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 17:45:33.597217  652427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2 ...
	I0329 17:45:33.597260  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2: {Name:mke3ca14fd6b4be816e504978b954612dd79105f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.597450  652427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2 ...
	I0329 17:45:33.597464  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2: {Name:mk0af0fd690c60c1f78cbf327f4dc9aa4e203738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.597541  652427 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt
	I0329 17:45:33.597602  652427 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key
	I0329 17:45:33.597645  652427 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key
	I0329 17:45:33.597658  652427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt with IP's: []
	I0329 17:45:33.735508  652427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt ...
	I0329 17:45:33.735548  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt: {Name:mka12619613fc7fd7a45c7c925d374dc9a1bcead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.735740  652427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key ...
	I0329 17:45:33.735754  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key: {Name:mkf885717686e79c361d511a026f2b3d78adc44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:33.735831  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0329 17:45:33.735850  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0329 17:45:33.735859  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0329 17:45:33.735874  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0329 17:45:33.735886  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0329 17:45:33.735903  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0329 17:45:33.735914  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0329 17:45:33.735923  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0329 17:45:33.735972  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem (1338 bytes)
	W0329 17:45:33.736011  652427 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087_empty.pem, impossibly tiny 0 bytes
	I0329 17:45:33.736023  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1679 bytes)
	I0329 17:45:33.736047  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 17:45:33.736071  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 17:45:33.736092  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 17:45:33.736133  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:45:33.736161  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.736178  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.736190  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem -> /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.736710  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 17:45:33.754152  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 17:45:33.770372  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 17:45:33.786810  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0329 17:45:33.803136  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 17:45:33.819669  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0329 17:45:33.835849  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 17:45:33.852099  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0329 17:45:33.868248  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /usr/share/ca-certificates/5640872.pem (1708 bytes)
	I0329 17:45:33.884501  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 17:45:33.900602  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem --> /usr/share/ca-certificates/564087.pem (1338 bytes)
	I0329 17:45:33.916860  652427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 17:45:33.928452  652427 ssh_runner.go:195] Run: openssl version
	I0329 17:45:33.933068  652427 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0329 17:45:33.933136  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/564087.pem && ln -fs /usr/share/ca-certificates/564087.pem /etc/ssl/certs/564087.pem"
	I0329 17:45:33.939892  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.942755  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.942817  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.942852  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/564087.pem
	I0329 17:45:33.948331  652427 command_runner.go:130] > 51391683
	I0329 17:45:33.948531  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/564087.pem /etc/ssl/certs/51391683.0"
	I0329 17:45:33.955332  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5640872.pem && ln -fs /usr/share/ca-certificates/5640872.pem /etc/ssl/certs/5640872.pem"
	I0329 17:45:33.962163  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.964845  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.964965  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.965019  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5640872.pem
	I0329 17:45:33.969377  652427 command_runner.go:130] > 3ec20f2e
	I0329 17:45:33.969573  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5640872.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 17:45:33.976389  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 17:45:33.983150  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.985837  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.985981  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.986026  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:45:33.990398  652427 command_runner.go:130] > b5213941
	I0329 17:45:33.990595  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 17:45:33.997297  652427 kubeadm.go:391] StartCluster: {Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:45:33.997425  652427 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 17:45:34.027789  652427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 17:45:34.034875  652427 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0329 17:45:34.034901  652427 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0329 17:45:34.034907  652427 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0329 17:45:34.034969  652427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 17:45:34.041760  652427 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 17:45:34.041807  652427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 17:45:34.047603  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0329 17:45:34.047633  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0329 17:45:34.047644  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0329 17:45:34.047657  652427 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 17:45:34.048182  652427 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 17:45:34.048224  652427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 17:45:34.264667  652427 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1021-gcp\n", err: exit status 1
	I0329 17:45:34.327290  652427 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0329 17:45:44.452040  652427 command_runner.go:130] > [init] Using Kubernetes version: v1.23.5
	I0329 17:45:44.452123  652427 command_runner.go:130] > [preflight] Running pre-flight checks
	I0329 17:45:44.452245  652427 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0329 17:45:44.452323  652427 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1021-gcp
	I0329 17:45:44.452385  652427 command_runner.go:130] > DOCKER_VERSION: 20.10.13
	I0329 17:45:44.452507  652427 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0329 17:45:44.452586  652427 command_runner.go:130] > OS: Linux
	I0329 17:45:44.452711  652427 command_runner.go:130] > CGROUPS_CPU: enabled
	I0329 17:45:44.452819  652427 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0329 17:45:44.452909  652427 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0329 17:45:44.453032  652427 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0329 17:45:44.453160  652427 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0329 17:45:44.453259  652427 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0329 17:45:44.453363  652427 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0329 17:45:44.453465  652427 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0329 17:45:44.453557  652427 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0329 17:45:44.453708  652427 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0329 17:45:44.453831  652427 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0329 17:45:44.455744  652427 out.go:203]   - Generating certificates and keys ...
	I0329 17:45:44.454116  652427 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0329 17:45:44.455845  652427 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0329 17:45:44.455942  652427 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0329 17:45:44.456068  652427 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0329 17:45:44.456141  652427 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0329 17:45:44.456222  652427 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0329 17:45:44.456286  652427 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0329 17:45:44.456353  652427 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0329 17:45:44.456509  652427 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20220329174520-564087] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0329 17:45:44.456576  652427 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0329 17:45:44.456763  652427 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20220329174520-564087] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0329 17:45:44.456843  652427 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0329 17:45:44.456916  652427 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0329 17:45:44.456979  652427 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0329 17:45:44.457079  652427 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0329 17:45:44.457154  652427 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0329 17:45:44.457231  652427 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0329 17:45:44.457306  652427 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0329 17:45:44.457377  652427 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0329 17:45:44.457541  652427 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0329 17:45:44.457643  652427 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0329 17:45:44.457692  652427 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0329 17:45:44.459377  652427 out.go:203]   - Booting up control plane ...
	I0329 17:45:44.457842  652427 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0329 17:45:44.459490  652427 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0329 17:45:44.459598  652427 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0329 17:45:44.459705  652427 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0329 17:45:44.459818  652427 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0329 17:45:44.460032  652427 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0329 17:45:44.460147  652427 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.002212 seconds
	I0329 17:45:44.460329  652427 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0329 17:45:44.460534  652427 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
	I0329 17:45:44.460903  652427 command_runner.go:130] > NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
	I0329 17:45:44.461013  652427 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0329 17:45:44.461374  652427 command_runner.go:130] > [mark-control-plane] Marking the node multinode-20220329174520-564087 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0329 17:45:44.462844  652427 out.go:203]   - Configuring RBAC rules ...
	I0329 17:45:44.461542  652427 command_runner.go:130] > [bootstrap-token] Using token: zawg7g.qjqz7bfgihqm7mmd
	I0329 17:45:44.463002  652427 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0329 17:45:44.463126  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0329 17:45:44.463313  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0329 17:45:44.463461  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0329 17:45:44.463594  652427 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0329 17:45:44.463701  652427 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0329 17:45:44.463824  652427 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0329 17:45:44.463880  652427 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0329 17:45:44.463949  652427 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0329 17:45:44.464031  652427 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0329 17:45:44.464129  652427 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0329 17:45:44.464167  652427 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0329 17:45:44.464243  652427 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0329 17:45:44.464306  652427 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0329 17:45:44.464378  652427 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0329 17:45:44.464417  652427 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0329 17:45:44.464461  652427 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0329 17:45:44.464552  652427 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0329 17:45:44.464634  652427 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0329 17:45:44.464729  652427 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0329 17:45:44.464794  652427 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0329 17:45:44.464873  652427 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token zawg7g.qjqz7bfgihqm7mmd \
	I0329 17:45:44.465009  652427 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 \
	I0329 17:45:44.465034  652427 command_runner.go:130] > 	--control-plane 
	I0329 17:45:44.465155  652427 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0329 17:45:44.465267  652427 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zawg7g.qjqz7bfgihqm7mmd \
	I0329 17:45:44.465435  652427 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 
	I0329 17:45:44.465474  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:45:44.465488  652427 cni.go:154] 1 nodes found, recommending kindnet
	I0329 17:45:44.467033  652427 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0329 17:45:44.467102  652427 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0329 17:45:44.471007  652427 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0329 17:45:44.471036  652427 command_runner.go:130] >   Size: 2675000   	Blocks: 5232       IO Block: 4096   regular file
	I0329 17:45:44.471048  652427 command_runner.go:130] > Device: 34h/52d	Inode: 8004372     Links: 1
	I0329 17:45:44.471061  652427 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0329 17:45:44.471073  652427 command_runner.go:130] > Access: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:45:44.471087  652427 command_runner.go:130] > Modify: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:45:44.471099  652427 command_runner.go:130] > Change: 2022-03-21 20:07:13.664642338 +0000
	I0329 17:45:44.471110  652427 command_runner.go:130] >  Birth: -
	I0329 17:45:44.471197  652427 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 17:45:44.471215  652427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0329 17:45:44.546893  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 17:45:45.559214  652427 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0329 17:45:45.562845  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0329 17:45:45.567610  652427 command_runner.go:130] > serviceaccount/kindnet created
	I0329 17:45:45.574049  652427 command_runner.go:130] > daemonset.apps/kindnet created
	I0329 17:45:45.578443  652427 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.031449182s)
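The `ssh_runner.go:235` line above reports a command that took longer than a second, appending its wall-clock duration ("(1.031449182s)"). A minimal sketch of that pattern, timing a subprocess and logging only slow completions (the function name, threshold, and log format are illustrative, not minikube's actual API):

```python
import subprocess
import time

def run_timed(cmd, slow_threshold=1.0):
    """Run a command, returning (CompletedProcess, elapsed seconds).
    Print a 'Completed' line only when the command exceeds the
    threshold, echoing the ssh_runner log style above."""
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    if elapsed > slow_threshold:
        print(f"Completed: {' '.join(cmd)}: ({elapsed:.9f}s)")
    return result, elapsed

result, elapsed = run_timed(["echo", "kindnet applied"])
```

Logging only slow commands keeps the log readable while still surfacing the apply calls (like the 1.03s CNI apply here) that dominate startup time.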
	I0329 17:45:45.578512  652427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 17:45:45.578604  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:45.578610  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=multinode-20220329174520-564087 minikube.k8s.io/updated_at=2022_03_29T17_45_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:45.585792  652427 command_runner.go:130] > -16
	I0329 17:45:45.669737  652427 command_runner.go:130] > node/multinode-20220329174520-564087 labeled
	I0329 17:45:45.672228  652427 ops.go:34] apiserver oom_adj: -16
	I0329 17:45:45.672289  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0329 17:45:45.672350  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:45.722907  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:46.223213  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:46.275757  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:46.723249  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:46.777030  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:47.223533  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:47.273570  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:47.723481  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:47.776650  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:48.223209  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:48.274842  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:48.723451  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:48.775854  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:49.223378  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:49.275283  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:49.723945  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:49.776134  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:50.223785  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:50.276830  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:50.723653  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:50.775722  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:51.223217  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:51.272699  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:51.723118  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:51.775142  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:52.223700  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:52.273692  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:52.723656  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:52.775225  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:53.223881  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:53.275870  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:53.723437  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:53.775237  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:54.223887  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:54.275636  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:54.723242  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:54.772350  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:55.223322  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:55.275242  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:55.724148  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:55.775597  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:56.223200  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:56.272110  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:56.723138  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:56.772079  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:57.224104  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:57.273136  652427 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0329 17:45:57.724114  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 17:45:57.851740  652427 command_runner.go:130] > NAME      SECRETS   AGE
	I0329 17:45:57.851761  652427 command_runner.go:130] > default   1         0s
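The run of "serviceaccounts \"default\" not found" errors above is expected: minikube polls `kubectl get sa default` roughly every 500ms until the token controller creates the ServiceAccount (about 12s here). A generic sketch of that wait loop, with a stub standing in for the kubectl call (all names here are illustrative):

```python
import time

def wait_for(check, interval=0.5, timeout=120.0):
    """Poll check() until it returns truthy or the timeout expires,
    mirroring the ~500ms retry cadence of the 'get sa default' loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Stub for `kubectl get sa default`: succeeds on the third attempt.
attempts = {"n": 0}
def sa_exists():
    attempts["n"] += 1
    return attempts["n"] >= 3

ready = wait_for(sa_exists, interval=0.01)
```

The fixed interval matches the log's cadence; a production client might add jitter or exponential backoff instead.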
	I0329 17:45:57.854086  652427 kubeadm.go:1020] duration metric: took 12.275551592s to wait for elevateKubeSystemPrivileges.
	I0329 17:45:57.854117  652427 kubeadm.go:393] StartCluster complete in 23.856825989s
	I0329 17:45:57.854139  652427 settings.go:142] acquiring lock: {Name:mkf193dd78851319876bf7c47a47f525125a4fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:57.854233  652427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:57.854893  652427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mke8ff89e3fadc84c0cca24c5855d2fcb9124f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 17:45:57.855378  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:57.855634  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:45:57.856055  652427 cert_rotation.go:137] Starting client certificate rotation controller
	I0329 17:45:57.856285  652427 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0329 17:45:57.856300  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:57.856309  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:57.863324  652427 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0329 17:45:57.863340  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:57.863346  652427 round_trippers.go:580]     Audit-Id: 4a78bb5e-c8d7-4219-9488-6844c3c299cc
	I0329 17:45:57.863350  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:57.863355  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:57.863359  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:57.863363  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:57.863367  652427 round_trippers.go:580]     Content-Length: 291
	I0329 17:45:57.863371  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:57 GMT
	I0329 17:45:57.863393  652427 request.go:1181] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"429","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0329 17:45:57.863740  652427 request.go:1181] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"429","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0329 17:45:57.863788  652427 round_trippers.go:463] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0329 17:45:57.863796  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:57.863802  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:57.863807  652427 round_trippers.go:473]     Content-Type: application/json
	I0329 17:45:57.867109  652427 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0329 17:45:57.867130  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:57.867139  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:57.867146  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:57.867153  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:57.867160  652427 round_trippers.go:580]     Content-Length: 291
	I0329 17:45:57.867167  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:57 GMT
	I0329 17:45:57.867177  652427 round_trippers.go:580]     Audit-Id: 33c36a87-4234-4541-970d-61535f949807
	I0329 17:45:57.867184  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:57.867213  652427 request.go:1181] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"440","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0329 17:45:58.368059  652427 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0329 17:45:58.368082  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.368090  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.370444  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.370466  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.370472  652427 round_trippers.go:580]     Content-Length: 291
	I0329 17:45:58.370477  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.370490  652427 round_trippers.go:580]     Audit-Id: 8c50bb59-405b-42cb-b9fb-76b703236f22
	I0329 17:45:58.370495  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.370499  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.370505  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.370509  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.370533  652427 request.go:1181] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc360980-9b7c-4a32-81d7-bbbb203ea418","resourceVersion":"450","creationTimestamp":"2022-03-29T17:45:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
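The round-trips above show the coredns rescale: GET the `autoscaling/v1` Scale subresource, PUT it back with `spec.replicas: 1`, then re-read until `status.replicas` converges. A sketch of that read-modify-write-poll flow against an in-memory stand-in for the apiserver (the helper names and stub are illustrative, not client-go):

```python
import copy
import time

def rescale(get_scale, put_scale, replicas, max_polls=100):
    """Set spec.replicas on a Scale object and poll until the
    controller reconciles status.replicas to match."""
    scale = get_scale()
    scale["spec"]["replicas"] = replicas
    put_scale(scale)
    for _ in range(max_polls):
        scale = get_scale()
        if scale["status"]["replicas"] == replicas:
            return scale
        time.sleep(0.01)
    raise TimeoutError("scale did not converge")

# In-memory stand-in that reconciles status to spec on every read.
state = {"spec": {"replicas": 2}, "status": {"replicas": 2}}
def get_scale():
    state["status"]["replicas"] = state["spec"]["replicas"]
    return copy.deepcopy(state)
def put_scale(obj):
    state["spec"]["replicas"] = obj["spec"]["replicas"]

final = rescale(get_scale, put_scale, 1)
```

In the real exchange the first re-read still reports `status.replicas: 2` (resourceVersion 440) and a later one shows 1 (resourceVersion 450), which is exactly the convergence the poll loop waits out.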
	I0329 17:45:58.370638  652427 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220329174520-564087" rescaled to 1
	I0329 17:45:58.370686  652427 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 17:45:58.372157  652427 out.go:176] * Verifying Kubernetes components...
	I0329 17:45:58.372204  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:45:58.370730  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 17:45:58.370757  652427 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 17:45:58.372298  652427 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220329174520-564087"
	I0329 17:45:58.370970  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:45:58.372319  652427 addons.go:65] Setting default-storageclass=true in profile "multinode-20220329174520-564087"
	I0329 17:45:58.372337  652427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220329174520-564087"
	I0329 17:45:58.372347  652427 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220329174520-564087"
	W0329 17:45:58.372371  652427 addons.go:165] addon storage-provisioner should already be in state true
	I0329 17:45:58.372417  652427 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:45:58.372732  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:58.372884  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:58.413855  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:58.414073  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:45:58.414377  652427 round_trippers.go:463] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0329 17:45:58.414388  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.414395  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.417608  652427 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 17:45:58.417345  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.417694  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.417709  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.417715  652427 round_trippers.go:580]     Content-Length: 109
	I0329 17:45:58.417720  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.417725  652427 round_trippers.go:580]     Audit-Id: b83edd2b-9136-46a1-ae5b-afc9a082cba3
	I0329 17:45:58.417732  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.417737  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.417746  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.417794  652427 request.go:1181] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"451"},"items":[]}
	I0329 17:45:58.417736  652427 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 17:45:58.417840  652427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 17:45:58.417896  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:58.418116  652427 addons.go:153] Setting addon default-storageclass=true in "multinode-20220329174520-564087"
	W0329 17:45:58.418133  652427 addons.go:165] addon default-storageclass should already be in state true
	I0329 17:45:58.418168  652427 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:45:58.418572  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:45:58.447231  652427 command_runner.go:130] > apiVersion: v1
	I0329 17:45:58.447256  652427 command_runner.go:130] > data:
	I0329 17:45:58.447263  652427 command_runner.go:130] >   Corefile: |
	I0329 17:45:58.447269  652427 command_runner.go:130] >     .:53 {
	I0329 17:45:58.447274  652427 command_runner.go:130] >         errors
	I0329 17:45:58.447281  652427 command_runner.go:130] >         health {
	I0329 17:45:58.447287  652427 command_runner.go:130] >            lameduck 5s
	I0329 17:45:58.447292  652427 command_runner.go:130] >         }
	I0329 17:45:58.447296  652427 command_runner.go:130] >         ready
	I0329 17:45:58.447305  652427 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0329 17:45:58.447311  652427 command_runner.go:130] >            pods insecure
	I0329 17:45:58.447318  652427 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0329 17:45:58.447327  652427 command_runner.go:130] >            ttl 30
	I0329 17:45:58.447332  652427 command_runner.go:130] >         }
	I0329 17:45:58.447338  652427 command_runner.go:130] >         prometheus :9153
	I0329 17:45:58.447346  652427 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0329 17:45:58.447353  652427 command_runner.go:130] >            max_concurrent 1000
	I0329 17:45:58.447358  652427 command_runner.go:130] >         }
	I0329 17:45:58.447364  652427 command_runner.go:130] >         cache 30
	I0329 17:45:58.447370  652427 command_runner.go:130] >         loop
	I0329 17:45:58.447375  652427 command_runner.go:130] >         reload
	I0329 17:45:58.447381  652427 command_runner.go:130] >         loadbalance
	I0329 17:45:58.447386  652427 command_runner.go:130] >     }
	I0329 17:45:58.447392  652427 command_runner.go:130] > kind: ConfigMap
	I0329 17:45:58.447397  652427 command_runner.go:130] > metadata:
	I0329 17:45:58.447408  652427 command_runner.go:130] >   creationTimestamp: "2022-03-29T17:45:44Z"
	I0329 17:45:58.447411  652427 command_runner.go:130] >   name: coredns
	I0329 17:45:58.447416  652427 command_runner.go:130] >   namespace: kube-system
	I0329 17:45:58.447428  652427 command_runner.go:130] >   resourceVersion: "269"
	I0329 17:45:58.447436  652427 command_runner.go:130] >   uid: 2dd730ee-e7f4-4bce-ba89-ef3784dbc9a2
	I0329 17:45:58.447602  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 17:45:58.447609  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:45:58.447893  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode
-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:45:58.448306  652427 node_ready.go:35] waiting up to 6m0s for node "multinode-20220329174520-564087" to be "Ready" ...
	I0329 17:45:58.448384  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:58.448392  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.448402  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.451169  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.451191  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.451198  652427 round_trippers.go:580]     Audit-Id: 33314548-468a-4e46-a188-62a2115de1b9
	I0329 17:45:58.451206  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.451213  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.451219  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.451226  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.451251  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.451376  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:45:58.458529  652427 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 17:45:58.458553  652427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 17:45:58.458612  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:45:58.461740  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:58.502451  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:45:58.665357  652427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 17:45:58.758795  652427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 17:45:58.767286  652427 command_runner.go:130] > configmap/coredns replaced
	I0329 17:45:58.772545  652427 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
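The "host record injected into CoreDNS" step above is the result of the sed pipeline logged earlier: minikube inserts a `hosts` block before the `forward` plugin in the CoreDNS Corefile so that `host.minikube.internal` resolves to the gateway IP. A minimal standalone sketch of that insertion (against a local Corefile copy, not the live ConfigMap — the file path and trimmed Corefile here are illustrative assumptions):

```shell
# Write a minimal Corefile resembling the one echoed in the log above.
cat > /tmp/Corefile <<'EOF'
.:53 {
    errors
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
}
EOF

# Insert a hosts block immediately before the forward plugin, as the
# logged pipeline does with GNU sed's "i\" command (\n splits lines).
sed '/^    forward . \/etc\/resolv.conf.*/i \    hosts {\n       192.168.49.1 host.minikube.internal\n       fallthrough\n    }' \
    /tmp/Corefile > /tmp/Corefile.patched

# The patched Corefile now carries the host record.
grep 'host.minikube.internal' /tmp/Corefile.patched
```

In the real flow the patched output is piped straight into `kubectl replace -f -` against the `coredns` ConfigMap, which is why the log then shows `configmap/coredns replaced`.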
	I0329 17:45:58.952777  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:58.952804  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:58.952816  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:58.955597  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:58.955627  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:58.955636  652427 round_trippers.go:580]     Audit-Id: a6b5d5bb-636c-4573-a5b9-2e4ba88af60e
	I0329 17:45:58.955644  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:58.955651  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:58.955658  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:58.955665  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:58.955672  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:58 GMT
	I0329 17:45:58.955856  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:45:59.084986  652427 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0329 17:45:59.085018  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0329 17:45:59.085027  652427 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0329 17:45:59.085037  652427 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0329 17:45:59.085044  652427 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0329 17:45:59.085050  652427 command_runner.go:130] > pod/storage-provisioner created
	I0329 17:45:59.085199  652427 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0329 17:45:59.090601  652427 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 17:45:59.090625  652427 addons.go:417] enableAddons completed in 719.880447ms
	I0329 17:45:59.452927  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:59.452952  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:59.452959  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:59.455471  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:59.455497  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:59.455504  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:59.455509  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:59.455513  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:59 GMT
	I0329 17:45:59.455518  652427 round_trippers.go:580]     Audit-Id: ce060bab-7da1-4d39-ba04-82b886cafee7
	I0329 17:45:59.455522  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:59.455527  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:59.455632  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:45:59.952138  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:45:59.952166  652427 round_trippers.go:469] Request Headers:
	I0329 17:45:59.952173  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:45:59.954614  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:45:59.954634  652427 round_trippers.go:577] Response Headers:
	I0329 17:45:59.954640  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:45:59.954645  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:45:59.954649  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:45:59 GMT
	I0329 17:45:59.954654  652427 round_trippers.go:580]     Audit-Id: 81888850-00d1-46f1-b210-087522f93e40
	I0329 17:45:59.954658  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:45:59.954663  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:45:59.954790  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:00.452787  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:00.452812  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:00.452820  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:00.455144  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:00.455165  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:00.455171  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:00.455175  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:00 GMT
	I0329 17:46:00.455179  652427 round_trippers.go:580]     Audit-Id: d95452df-5ec6-4caa-94f6-5af73479c326
	I0329 17:46:00.455183  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:00.455188  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:00.455200  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:00.455307  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:00.455624  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
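The poll loop above (`node_ready.go`) repeatedly GETs the Node object and reports `"Ready":"False"` until the kubelet posts a Ready condition. The decision itself is just a scan of `status.conditions` for the `Ready` entry; a minimal sketch of that check (a hypothetical helper, not minikube's Go code — the sample node dicts are illustrative):

```python
def node_is_ready(node: dict) -> bool:
    """Return True iff the Node object's Ready condition has status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition yet, e.g. kubelet not reporting

# At this point in the log the node is still NotReady:
not_ready = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
print(node_is_ready(not_ready))  # prints False
```

The loop keeps issuing the same GET on a fixed interval until this check flips to True or the 6m0s deadline from `node_ready.go:35` expires.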
	I0329 17:46:00.952881  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:00.952914  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:00.952924  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:00.955403  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:00.955428  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:00.955438  652427 round_trippers.go:580]     Audit-Id: 4c9d15b9-7b0d-4a33-ac21-04c9710aa4df
	I0329 17:46:00.955444  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:00.955451  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:00.955459  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:00.955467  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:00.955477  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:00 GMT
	I0329 17:46:00.955584  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:01.453213  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:01.453240  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:01.453249  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:01.455468  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:01.455495  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:01.455504  652427 round_trippers.go:580]     Audit-Id: 82ab7820-eeb8-48bc-8425-1445ed6b5712
	I0329 17:46:01.455510  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:01.455517  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:01.455524  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:01.455532  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:01.455549  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:01 GMT
	I0329 17:46:01.455671  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:01.952213  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:01.952237  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:01.952244  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:01.954935  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:01.954963  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:01.954972  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:01.954980  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:01.954986  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:01.954993  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:01 GMT
	I0329 17:46:01.954999  652427 round_trippers.go:580]     Audit-Id: 2e10425c-be6f-415f-9f21-fe2a7ca71ff8
	I0329 17:46:01.955010  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:01.955131  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:02.452430  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:02.452460  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:02.452471  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:02.455326  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:02.455355  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:02.455365  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:02.455372  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:02.455380  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:02.455386  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:02.455393  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:02 GMT
	I0329 17:46:02.455400  652427 round_trippers.go:580]     Audit-Id: c1155a0c-671b-4e47-8ab8-7c7e4ccf418f
	I0329 17:46:02.455515  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:02.455838  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:02.953137  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:02.953164  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:02.953175  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:02.956173  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:02.956200  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:02.956208  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:02.956215  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:02.956221  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:02.956229  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:02 GMT
	I0329 17:46:02.956236  652427 round_trippers.go:580]     Audit-Id: 7b5d9e95-b9d3-40fa-ab1e-0a2ec2f61e50
	I0329 17:46:02.956248  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:02.956348  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:03.452986  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:03.453019  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:03.453030  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:03.455902  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:03.455934  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:03.455944  652427 round_trippers.go:580]     Audit-Id: 748c3a93-49d5-4013-a11d-5c4bf933916e
	I0329 17:46:03.455950  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:03.455956  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:03.455963  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:03.455970  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:03.455977  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:03 GMT
	I0329 17:46:03.456159  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:03.952792  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:03.952821  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:03.952832  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:03.955474  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:03.955499  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:03.955508  652427 round_trippers.go:580]     Audit-Id: fc837eb8-3bbe-450f-ad18-43a3ff20b991
	I0329 17:46:03.955515  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:03.955522  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:03.955529  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:03.955536  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:03.955542  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:03 GMT
	I0329 17:46:03.955704  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:04.452242  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:04.452269  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:04.452276  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:04.454763  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:04.454786  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:04.454793  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:04.454805  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:04.454812  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:04 GMT
	I0329 17:46:04.454821  652427 round_trippers.go:580]     Audit-Id: a2e1a4e9-c534-4fb7-a236-7d5b85716e5c
	I0329 17:46:04.454827  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:04.454834  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:04.454937  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:04.952472  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:04.952494  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:04.952502  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:04.954824  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:04.954852  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:04.954861  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:04.954867  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:04.954871  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:04.954876  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:04.954880  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:04 GMT
	I0329 17:46:04.954884  652427 round_trippers.go:580]     Audit-Id: 0aae4334-7a50-4066-9046-f853c02d5529
	I0329 17:46:04.954978  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:04.955309  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:05.452671  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:05.452694  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:05.452702  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:05.455118  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:05.455149  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:05.455159  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:05.455167  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:05.455174  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:05.455180  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:05.455187  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:05 GMT
	I0329 17:46:05.455195  652427 round_trippers.go:580]     Audit-Id: 67d61b81-ef74-4e42-b552-65ab957ea414
	I0329 17:46:05.455378  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:05.952973  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:05.953003  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:05.953014  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:05.955542  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:05.955581  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:05.955591  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:05.955598  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:05.955605  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:05.955613  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:05 GMT
	I0329 17:46:05.955620  652427 round_trippers.go:580]     Audit-Id: 04cfcdeb-84b8-4468-93bb-26cefb531c13
	I0329 17:46:05.955626  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:05.955726  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:06.452288  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:06.452313  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:06.452321  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:06.455010  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:06.455033  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:06.455041  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:06.455045  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:06.455050  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:06 GMT
	I0329 17:46:06.455056  652427 round_trippers.go:580]     Audit-Id: 9501d640-39b6-4f90-bd20-49156ec50fcc
	I0329 17:46:06.455062  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:06.455067  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:06.455198  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:06.952729  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:06.952759  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:06.952769  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:06.955312  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:06.955339  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:06.955348  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:06.955356  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:06 GMT
	I0329 17:46:06.955361  652427 round_trippers.go:580]     Audit-Id: 3d2244d6-6978-4659-9238-5f084a30b6cc
	I0329 17:46:06.955365  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:06.955370  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:06.955374  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:06.955466  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:06.955784  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:07.453069  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:07.453095  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:07.453104  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:07.455540  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:07.455564  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:07.455570  652427 round_trippers.go:580]     Audit-Id: 83b19bf4-0b6a-494a-8cf6-413f37d42a03
	I0329 17:46:07.455574  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:07.455579  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:07.455583  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:07.455587  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:07.455592  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:07 GMT
	I0329 17:46:07.455669  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:07.952425  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:07.952452  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:07.952464  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:07.955002  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:07.955033  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:07.955043  652427 round_trippers.go:580]     Audit-Id: f4a6adb1-e087-4261-8db6-c8584dc0aae9
	I0329 17:46:07.955051  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:07.955058  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:07.955071  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:07.955078  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:07.955084  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:07 GMT
	I0329 17:46:07.955193  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:08.452855  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:08.452884  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:08.452894  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:08.455304  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:08.455327  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:08.455333  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:08 GMT
	I0329 17:46:08.455338  652427 round_trippers.go:580]     Audit-Id: e0f83135-de42-40e5-9849-3a40561c2f49
	I0329 17:46:08.455343  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:08.455347  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:08.455352  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:08.455358  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:08.455460  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:08.953100  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:08.953123  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:08.953130  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:08.955628  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:08.955657  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:08.955667  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:08.955674  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:08.955681  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:08 GMT
	I0329 17:46:08.955688  652427 round_trippers.go:580]     Audit-Id: 4487def9-5d98-49a8-970e-af577ace467b
	I0329 17:46:08.955741  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:08.955761  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:08.955884  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:08.956297  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:09.452316  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:09.452339  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:09.452349  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:09.454784  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:09.454809  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:09.454818  652427 round_trippers.go:580]     Audit-Id: b6d3dd39-ed47-4871-ad4b-9d81c0f2fa2f
	I0329 17:46:09.454825  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:09.454832  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:09.454838  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:09.454846  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:09.454857  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:09 GMT
	I0329 17:46:09.455000  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:09.952623  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:09.952648  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:09.952655  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:09.955221  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:09.955244  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:09.955250  652427 round_trippers.go:580]     Audit-Id: 99f251b9-28d8-448f-b5b2-23d4f695c5b8
	I0329 17:46:09.955255  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:09.955259  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:09.955266  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:09.955274  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:09.955281  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:09 GMT
	I0329 17:46:09.955444  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:10.452175  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:10.452201  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:10.452208  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:10.454680  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:10.454704  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:10.454712  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:10.454720  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:10.454726  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:10.454733  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:10 GMT
	I0329 17:46:10.454739  652427 round_trippers.go:580]     Audit-Id: b0608705-4c9c-43af-a092-a70d79dcb548
	I0329 17:46:10.454745  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:10.454856  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:10.952220  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:10.952246  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:10.952253  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:10.954635  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:10.954659  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:10.954666  652427 round_trippers.go:580]     Audit-Id: 07ca974c-f0bd-4f52-b11f-6bb2b40add9a
	I0329 17:46:10.954671  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:10.954676  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:10.954680  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:10.954685  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:10.954695  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:10 GMT
	I0329 17:46:10.954831  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:11.452367  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:11.452392  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:11.452399  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:11.454620  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:11.454646  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:11.454655  652427 round_trippers.go:580]     Audit-Id: df0c9269-cb59-46cc-b8df-e33dc0c2b20a
	I0329 17:46:11.454662  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:11.454669  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:11.454676  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:11.454687  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:11.454695  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:11 GMT
	I0329 17:46:11.454825  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:11.455175  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:11.952355  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:11.952383  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:11.952391  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:11.954806  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:11.954829  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:11.954836  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:11.954841  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:11.954845  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:11.954850  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:11 GMT
	I0329 17:46:11.954854  652427 round_trippers.go:580]     Audit-Id: 5f6403f9-7b6f-473a-9ac8-3a57f700e0ec
	I0329 17:46:11.954859  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:11.954963  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:12.452474  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:12.452500  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:12.452507  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:12.455006  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:12.455034  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:12.455041  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:12.455045  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:12.455050  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:12.455054  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:12 GMT
	I0329 17:46:12.455059  652427 round_trippers.go:580]     Audit-Id: 02cd31b4-4450-4d0b-b716-edb25ec4b28e
	I0329 17:46:12.455063  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:12.455184  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:12.952721  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:12.952746  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:12.952760  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:12.955005  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:12.955031  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:12.955040  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:12.955046  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:12.955053  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:12.955061  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:12 GMT
	I0329 17:46:12.955071  652427 round_trippers.go:580]     Audit-Id: c2ca7a5a-f0f7-4e96-88c1-7f978288d21f
	I0329 17:46:12.955078  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:12.955167  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:13.452883  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:13.452912  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:13.452920  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:13.455238  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:13.455257  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:13.455264  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:13.455269  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:13.455275  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:13 GMT
	I0329 17:46:13.455288  652427 round_trippers.go:580]     Audit-Id: 2a8257d7-328e-44e6-8e1c-62bee7baee55
	I0329 17:46:13.455300  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:13.455311  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:13.455463  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:13.455809  652427 node_ready.go:58] node "multinode-20220329174520-564087" has status "Ready":"False"
	I0329 17:46:13.952994  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:13.953022  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:13.953031  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:13.955520  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:13.955541  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:13.955547  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:13.955552  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:13.955556  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:13 GMT
	I0329 17:46:13.955560  652427 round_trippers.go:580]     Audit-Id: b9262a57-33b2-4301-80ff-1c60c4a98a8f
	I0329 17:46:13.955564  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:13.955569  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:13.955677  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:14.452222  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:14.452249  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:14.452260  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:14.454563  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:14.454584  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:14.454590  652427 round_trippers.go:580]     Audit-Id: e514765c-1c3e-49e7-822b-6e829d5dd74d
	I0329 17:46:14.454595  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:14.454599  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:14.454605  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:14.454612  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:14.454619  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:14 GMT
	I0329 17:46:14.454735  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:14.952276  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:14.952302  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:14.952310  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:14.954724  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:14.954753  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:14.954762  652427 round_trippers.go:580]     Audit-Id: 9ed1258a-72c2-49e2-a8e3-88d139b7fade
	I0329 17:46:14.954769  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:14.954777  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:14.954784  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:14.954792  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:14.954799  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:14 GMT
	I0329 17:46:14.954914  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"396","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5211 chars]
	I0329 17:46:15.452614  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:15.452638  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.452646  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.454814  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:15.454839  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.454848  652427 round_trippers.go:580]     Audit-Id: 5a116d81-f7db-49bd-8e1a-1a2dfaefc3d6
	I0329 17:46:15.454856  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.454863  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.454870  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.454876  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.454881  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.455000  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:15.455397  652427 node_ready.go:49] node "multinode-20220329174520-564087" has status "Ready":"True"
	I0329 17:46:15.455423  652427 node_ready.go:38] duration metric: took 17.007097337s waiting for node "multinode-20220329174520-564087" to be "Ready" ...
	I0329 17:46:15.455435  652427 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:15.455554  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:15.455567  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.455583  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.458597  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:15.458625  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.458635  652427 round_trippers.go:580]     Audit-Id: f859ad80-6fa5-406f-a568-7c98a009b2aa
	I0329 17:46:15.458642  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.458651  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.458658  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.458666  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.458673  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.459036  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"487"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55643 chars]
	I0329 17:46:15.463172  652427 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:15.463253  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:15.463265  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.463275  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.465191  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:15.465210  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.465216  652427 round_trippers.go:580]     Audit-Id: ff77fb91-6575-476d-b708-86cdd6f9baed
	I0329 17:46:15.465221  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.465228  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.465235  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.465246  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.465258  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.465383  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:15.465769  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:15.465783  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.465790  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.467422  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:15.467440  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.467448  652427 round_trippers.go:580]     Audit-Id: e48cb016-ce23-4af0-99fc-3131ccc8894a
	I0329 17:46:15.467456  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.467463  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.467470  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.467479  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.467484  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.467607  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:15.968339  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:15.968372  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.968384  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.973042  652427 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0329 17:46:15.973089  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.973098  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.973106  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.973112  652427 round_trippers.go:580]     Audit-Id: c342f6e6-506a-4199-919a-49cdac91c576
	I0329 17:46:15.973118  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.973124  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.973132  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.973279  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:15.973753  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:15.973770  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:15.973777  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:15.975817  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:15.975839  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:15.975847  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:15.975854  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:15.975860  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:15.975867  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:15.975873  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:15 GMT
	I0329 17:46:15.975880  652427 round_trippers.go:580]     Audit-Id: c757f294-067c-4ba6-8808-5df87aebe383
	I0329 17:46:15.976021  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:16.468632  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:16.468660  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.468673  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.471280  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:16.471310  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.471319  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.471327  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.471334  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.471341  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.471347  652427 round_trippers.go:580]     Audit-Id: 352eff47-644f-4cca-ba1a-20e8c50f3b7a
	I0329 17:46:16.471353  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.471492  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:16.472121  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:16.472142  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.472152  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.473948  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:16.473974  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.473983  652427 round_trippers.go:580]     Audit-Id: 12cc25f2-0339-4156-82ff-275c5129d565
	I0329 17:46:16.473990  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.473997  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.474004  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.474013  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.474023  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.474122  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:16.968749  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:16.968782  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.968792  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.971866  652427 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0329 17:46:16.971896  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.971908  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.971916  652427 round_trippers.go:580]     Audit-Id: f81cc77b-406c-429a-8221-e23406850077
	I0329 17:46:16.971929  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.971942  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.971954  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.971961  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.972148  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"487","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5859 chars]
	I0329 17:46:16.972625  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:16.972642  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:16.972652  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:16.974550  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:16.974574  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:16.974582  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:16.974589  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:16 GMT
	I0329 17:46:16.974595  652427 round_trippers.go:580]     Audit-Id: eb5bb099-a121-42ca-b11a-c7fe8f8e6b6e
	I0329 17:46:16.974603  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:16.974614  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:16.974622  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:16.974740  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.468304  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:17.468337  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.468345  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.470738  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:17.470760  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.470766  652427 round_trippers.go:580]     Audit-Id: 27ec085f-1c7f-45ac-b700-ca0ffcca1900
	I0329 17:46:17.470770  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.470776  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.470782  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.470789  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.470796  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.470916  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5986 chars]
	I0329 17:46:17.471369  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.471385  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.471392  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.473208  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.473229  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.473235  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.473240  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.473244  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.473248  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.473252  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.473256  652427 round_trippers.go:580]     Audit-Id: 7293679e-eb81-43b3-a157-58d13349713a
	I0329 17:46:17.473373  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.473702  652427 pod_ready.go:92] pod "coredns-64897985d-6tcql" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.473738  652427 pod_ready.go:81] duration metric: took 2.010541516s waiting for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.473751  652427 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.473794  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220329174520-564087
	I0329 17:46:17.473802  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.473808  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.475663  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.475684  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.475689  652427 round_trippers.go:580]     Audit-Id: 29c9a7bf-9c5d-4845-8114-9ba425c02b3e
	I0329 17:46:17.475694  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.475698  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.475703  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.475708  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.475714  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.475822  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220329174520-564087","namespace":"kube-system","uid":"ac5cd989-3ac7-4d02-94c0-0c2843391dfe","resourceVersion":"433","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.mirror":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.seen":"2022-03-29T17:45:44.427846936Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"Fiel
dsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube [truncated 5818 chars]
	I0329 17:46:17.476198  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.476212  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.476218  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.477874  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.477898  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.477908  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.477915  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.477930  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.477942  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.477954  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.477965  652427 round_trippers.go:580]     Audit-Id: 1e666c0e-abe8-4fed-b5ca-c2fa024d5633
	I0329 17:46:17.478041  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.478378  652427 pod_ready.go:92] pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.478410  652427 pod_ready.go:81] duration metric: took 4.638363ms waiting for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.478433  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.478498  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220329174520-564087
	I0329 17:46:17.478513  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.478522  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.480244  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.480261  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.480269  652427 round_trippers.go:580]     Audit-Id: 056c8546-3cb7-4ce3-982c-f2fa9315662f
	I0329 17:46:17.480276  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.480283  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.480293  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.480310  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.480317  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.480452  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220329174520-564087","namespace":"kube-system","uid":"112c5d83-654f-4235-9e38-a435d3f2d433","resourceVersion":"363","creationTimestamp":"2022-03-29T17:45:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.mirror":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.seen":"2022-03-29T17:45:37.376573317Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17
:45:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio [truncated 8327 chars]
	I0329 17:46:17.480880  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.480895  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.480900  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.482414  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.482442  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.482449  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.482454  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.482461  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.482468  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.482485  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.482492  652427 round_trippers.go:580]     Audit-Id: 0c8430bd-4516-4580-90d7-85b9217ea3c8
	I0329 17:46:17.482631  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.482944  652427 pod_ready.go:92] pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.482959  652427 pod_ready.go:81] duration metric: took 4.512997ms waiting for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.482968  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.483011  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220329174520-564087
	I0329 17:46:17.483019  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.483024  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.484605  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.484621  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.484626  652427 round_trippers.go:580]     Audit-Id: 5d0d73a4-b0b0-4217-8288-b31bd4698a5d
	I0329 17:46:17.484631  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.484636  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.484640  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.484645  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.484651  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.484740  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220329174520-564087","namespace":"kube-system","uid":"66589d1b-e363-4195-bbc3-4ff12b3bf3cf","resourceVersion":"370","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.mirror":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.seen":"2022-03-29T17:45:44.427888140Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 7902 chars]
	I0329 17:46:17.485143  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.485157  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.485163  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.486766  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.486785  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.486792  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.486799  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.486806  652427 round_trippers.go:580]     Audit-Id: 9f0c79d0-9283-473d-bd38-8296174b048f
	I0329 17:46:17.486818  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.486829  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.486839  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.486918  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.487213  652427 pod_ready.go:92] pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.487228  652427 pod_ready.go:81] duration metric: took 4.252998ms waiting for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.487239  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.487285  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29kjv
	I0329 17:46:17.487295  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.487305  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.488877  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.488892  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.488899  652427 round_trippers.go:580]     Audit-Id: c94e0991-3439-44f6-b9a8-d55c72d17235
	I0329 17:46:17.488905  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.488912  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.488919  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.488926  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.488937  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.489029  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29kjv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ca1dbe90-6525-4660-81a7-68b2c47378da","resourceVersion":"468","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"controller-revision-hash":"8455b5959d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"27cb158d-aed9-4d83-a6c4-788f687069bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27cb158d-aed9-4d83-a6c4-788f687069bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5551 chars]
	I0329 17:46:17.489390  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.489404  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.489410  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.490733  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:17.490746  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.490753  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.490760  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.490777  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.490788  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.490802  652427 round_trippers.go:580]     Audit-Id: c0f5c346-4149-48a0-a943-8debe20d6493
	I0329 17:46:17.490809  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.490935  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.491340  652427 pod_ready.go:92] pod "kube-proxy-29kjv" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.491354  652427 pod_ready.go:81] duration metric: took 4.107823ms waiting for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.491366  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.668754  652427 request.go:597] Waited for 177.326154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:17.668816  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:17.668828  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.668836  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.671489  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:17.671513  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.671520  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.671524  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.671529  652427 round_trippers.go:580]     Audit-Id: 5d2e3dce-cddf-439f-b1fe-b58b2bdb851f
	I0329 17:46:17.671533  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.671537  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.671542  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.671684  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220329174520-564087","namespace":"kube-system","uid":"4ba1ded4-06a3-44a7-922f-b02863ff0da0","resourceVersion":"369","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.mirror":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.seen":"2022-03-29T17:45:44.427890891Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:
kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kub [truncated 4784 chars]
	I0329 17:46:17.869132  652427 request.go:597] Waited for 197.047985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.869193  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:17.869198  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.869205  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.871752  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:17.871773  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.871781  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.871787  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.871793  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.871800  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.871806  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.871817  652427 round_trippers.go:580]     Audit-Id: 39e9f7b8-d734-453f-988c-6c376df9d045
	I0329 17:46:17.871947  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:17.872273  652427 pod_ready.go:92] pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:17.872293  652427 pod_ready.go:81] duration metric: took 380.916917ms waiting for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:17.872308  652427 pod_ready.go:38] duration metric: took 2.416830572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:17.872343  652427 api_server.go:51] waiting for apiserver process to appear ...
	I0329 17:46:17.872393  652427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 17:46:17.881985  652427 command_runner.go:130] > 1711
	I0329 17:46:17.882022  652427 api_server.go:71] duration metric: took 19.511312052s to wait for apiserver process to appear ...
	I0329 17:46:17.882032  652427 api_server.go:87] waiting for apiserver healthz status ...
	I0329 17:46:17.882043  652427 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0329 17:46:17.886427  652427 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0329 17:46:17.886481  652427 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0329 17:46:17.886486  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:17.886493  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:17.887114  652427 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0329 17:46:17.887132  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:17.887140  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:17.887148  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:17.887155  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:17.887162  652427 round_trippers.go:580]     Content-Length: 263
	I0329 17:46:17.887177  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:17 GMT
	I0329 17:46:17.887181  652427 round_trippers.go:580]     Audit-Id: 0b16d397-f192-4f1f-95ac-dbcfea22cc5e
	I0329 17:46:17.887186  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:17.887205  652427 request.go:1181] Response Body: {
	  "major": "1",
	  "minor": "23",
	  "gitVersion": "v1.23.5",
	  "gitCommit": "c285e781331a3785a7f436042c65c5641ce8a9e9",
	  "gitTreeState": "clean",
	  "buildDate": "2022-03-16T15:52:18Z",
	  "goVersion": "go1.17.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0329 17:46:17.887283  652427 api_server.go:140] control plane version: v1.23.5
	I0329 17:46:17.887297  652427 api_server.go:130] duration metric: took 5.260463ms to wait for apiserver health ...
	I0329 17:46:17.887303  652427 system_pods.go:43] waiting for kube-system pods to appear ...
	I0329 17:46:18.068636  652427 request.go:597] Waited for 181.269107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.068718  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.068746  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.068754  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.071771  652427 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0329 17:46:18.071791  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.071797  652427 round_trippers.go:580]     Audit-Id: 095e2ec8-e6b2-4c55-9f4a-4e554a84c0a4
	I0329 17:46:18.071801  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.071806  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.071810  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.071815  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.071819  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.072360  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55754 chars]
	I0329 17:46:18.074140  652427 system_pods.go:59] 8 kube-system pods found
	I0329 17:46:18.074181  652427 system_pods.go:61] "coredns-64897985d-6tcql" [a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2] Running
	I0329 17:46:18.074193  652427 system_pods.go:61] "etcd-multinode-20220329174520-564087" [ac5cd989-3ac7-4d02-94c0-0c2843391dfe] Running
	I0329 17:46:18.074201  652427 system_pods.go:61] "kindnet-7hm65" [8d9c821d-cc40-4073-95ab-b810b61210a7] Running
	I0329 17:46:18.074209  652427 system_pods.go:61] "kube-apiserver-multinode-20220329174520-564087" [112c5d83-654f-4235-9e38-a435d3f2d433] Running
	I0329 17:46:18.074220  652427 system_pods.go:61] "kube-controller-manager-multinode-20220329174520-564087" [66589d1b-e363-4195-bbc3-4ff12b3bf3cf] Running
	I0329 17:46:18.074238  652427 system_pods.go:61] "kube-proxy-29kjv" [ca1dbe90-6525-4660-81a7-68b2c47378da] Running
	I0329 17:46:18.074245  652427 system_pods.go:61] "kube-scheduler-multinode-20220329174520-564087" [4ba1ded4-06a3-44a7-922f-b02863ff0da0] Running
	I0329 17:46:18.074251  652427 system_pods.go:61] "storage-provisioner" [7d9d3f42-beb4-4d9d-82ac-3984ac52c132] Running
	I0329 17:46:18.074263  652427 system_pods.go:74] duration metric: took 186.954024ms to wait for pod list to return data ...
	I0329 17:46:18.074277  652427 default_sa.go:34] waiting for default service account to be created ...
	I0329 17:46:18.268750  652427 request.go:597] Waited for 194.395103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0329 17:46:18.268819  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0329 17:46:18.268826  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.268848  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.271155  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:18.271177  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.271188  652427 round_trippers.go:580]     Content-Length: 304
	I0329 17:46:18.271192  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.271197  652427 round_trippers.go:580]     Audit-Id: cbcb0903-9e90-4114-85e1-fa7a6c1f515f
	I0329 17:46:18.271204  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.271211  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.271222  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.271229  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.271258  652427 request.go:1181] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"89609a58-81bb-4b4a-bdd9-152993280465","resourceVersion":"384","creationTimestamp":"2022-03-29T17:45:57Z"},"secrets":[{"name":"default-token-vh5wm"}]}]}
	I0329 17:46:18.271467  652427 default_sa.go:45] found service account: "default"
	I0329 17:46:18.271484  652427 default_sa.go:55] duration metric: took 197.197498ms for default service account to be created ...
	I0329 17:46:18.271491  652427 system_pods.go:116] waiting for k8s-apps to be running ...
	I0329 17:46:18.468930  652427 request.go:597] Waited for 197.338154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.468982  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:18.468987  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.468994  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.473968  652427 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0329 17:46:18.473999  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.474018  652427 round_trippers.go:580]     Audit-Id: ead69c26-9a5d-4812-a5d3-7fc3e0e396e4
	I0329 17:46:18.474025  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.474032  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.474038  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.474052  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.474059  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.475192  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55754 chars]
	I0329 17:46:18.477578  652427 system_pods.go:86] 8 kube-system pods found
	I0329 17:46:18.477613  652427 system_pods.go:89] "coredns-64897985d-6tcql" [a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2] Running
	I0329 17:46:18.477621  652427 system_pods.go:89] "etcd-multinode-20220329174520-564087" [ac5cd989-3ac7-4d02-94c0-0c2843391dfe] Running
	I0329 17:46:18.477627  652427 system_pods.go:89] "kindnet-7hm65" [8d9c821d-cc40-4073-95ab-b810b61210a7] Running
	I0329 17:46:18.477633  652427 system_pods.go:89] "kube-apiserver-multinode-20220329174520-564087" [112c5d83-654f-4235-9e38-a435d3f2d433] Running
	I0329 17:46:18.477640  652427 system_pods.go:89] "kube-controller-manager-multinode-20220329174520-564087" [66589d1b-e363-4195-bbc3-4ff12b3bf3cf] Running
	I0329 17:46:18.477652  652427 system_pods.go:89] "kube-proxy-29kjv" [ca1dbe90-6525-4660-81a7-68b2c47378da] Running
	I0329 17:46:18.477657  652427 system_pods.go:89] "kube-scheduler-multinode-20220329174520-564087" [4ba1ded4-06a3-44a7-922f-b02863ff0da0] Running
	I0329 17:46:18.477670  652427 system_pods.go:89] "storage-provisioner" [7d9d3f42-beb4-4d9d-82ac-3984ac52c132] Running
	I0329 17:46:18.477678  652427 system_pods.go:126] duration metric: took 206.181639ms to wait for k8s-apps to be running ...
	I0329 17:46:18.477692  652427 system_svc.go:44] waiting for kubelet service to be running ....
	I0329 17:46:18.477740  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:46:18.491034  652427 system_svc.go:56] duration metric: took 13.333193ms WaitForService to wait for kubelet.
	I0329 17:46:18.491059  652427 kubeadm.go:548] duration metric: took 20.120349922s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0329 17:46:18.491078  652427 node_conditions.go:102] verifying NodePressure condition ...
	I0329 17:46:18.668395  652427 request.go:597] Waited for 177.238607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:18.668474  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:18.668490  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:18.668498  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:18.671002  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:18.671034  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:18.671040  652427 round_trippers.go:580]     Audit-Id: e64adfe9-d069-446e-bc45-1bc0796f0b85
	I0329 17:46:18.671045  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:18.671049  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:18.671055  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:18.671065  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:18.671081  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:18 GMT
	I0329 17:46:18.671196  652427 request.go:1181] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","vol
umes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi [truncated 5296 chars]
	I0329 17:46:18.671557  652427 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0329 17:46:18.671577  652427 node_conditions.go:123] node cpu capacity is 8
	I0329 17:46:18.671589  652427 node_conditions.go:105] duration metric: took 180.507051ms to run NodePressure ...
	I0329 17:46:18.671604  652427 start.go:213] waiting for startup goroutines ...
	I0329 17:46:18.673932  652427 out.go:176] 
	I0329 17:46:18.674128  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:46:18.674206  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:46:18.676080  652427 out.go:176] * Starting worker node multinode-20220329174520-564087-m02 in cluster multinode-20220329174520-564087
	I0329 17:46:18.676110  652427 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:46:18.677788  652427 out.go:176] * Pulling base image ...
	I0329 17:46:18.677823  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:46:18.677844  652427 cache.go:57] Caching tarball of preloaded images
	I0329 17:46:18.677911  652427 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:46:18.677993  652427 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 17:46:18.678015  652427 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 17:46:18.678095  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:46:18.723196  652427 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 17:46:18.723227  652427 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 17:46:18.723246  652427 cache.go:208] Successfully downloaded all kic artifacts
	I0329 17:46:18.723288  652427 start.go:348] acquiring machines lock for multinode-20220329174520-564087-m02: {Name:mk2e91789fb1ab42dd81da420c805ce0e9722cdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 17:46:18.723439  652427 start.go:352] acquired machines lock for "multinode-20220329174520-564087-m02" in 125.766µs
	I0329 17:46:18.723467  652427 start.go:90] Provisioning new machine with config: &{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name:m02 IP: Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0329 17:46:18.723578  652427 start.go:127] createHost starting for "m02" (driver="docker")
	I0329 17:46:18.726752  652427 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0329 17:46:18.726864  652427 start.go:161] libmachine.API.Create for "multinode-20220329174520-564087" (driver="docker")
	I0329 17:46:18.726895  652427 client.go:168] LocalClient.Create starting
	I0329 17:46:18.726980  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 17:46:18.727018  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:46:18.727043  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:46:18.727103  652427 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 17:46:18.727128  652427 main.go:130] libmachine: Decoding PEM data...
	I0329 17:46:18.727146  652427 main.go:130] libmachine: Parsing certificate...
	I0329 17:46:18.727418  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:46:18.759003  652427 network_create.go:75] Found existing network {name:multinode-20220329174520-564087 subnet:0xc000b6ee70 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0329 17:46:18.759045  652427 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20220329174520-564087-m02" container
	I0329 17:46:18.759107  652427 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 17:46:18.790190  652427 cli_runner.go:133] Run: docker volume create multinode-20220329174520-564087-m02 --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --label created_by.minikube.sigs.k8s.io=true
	I0329 17:46:18.822198  652427 oci.go:102] Successfully created a docker volume multinode-20220329174520-564087-m02
	I0329 17:46:18.822278  652427 cli_runner.go:133] Run: docker run --rm --name multinode-20220329174520-564087-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --entrypoint /usr/bin/test -v multinode-20220329174520-564087-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 17:46:19.362957  652427 oci.go:106] Successfully prepared a docker volume multinode-20220329174520-564087-m02
	I0329 17:46:19.363011  652427 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:46:19.363038  652427 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 17:46:19.363116  652427 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 17:46:27.622548  652427 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220329174520-564087-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.259374052s)
	I0329 17:46:27.622582  652427 kic.go:188] duration metric: took 8.259542 seconds to extract preloaded images to volume
	W0329 17:46:27.622637  652427 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0329 17:46:27.622651  652427 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0329 17:46:27.622694  652427 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 17:46:27.708276  652427 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220329174520-564087-m02 --name multinode-20220329174520-564087-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220329174520-564087-m02 --network multinode-20220329174520-564087 --ip 192.168.49.3 --volume multinode-20220329174520-564087-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 17:46:28.111983  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Running}}
	I0329 17:46:28.145714  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:46:28.180625  652427 cli_runner.go:133] Run: docker exec multinode-20220329174520-564087-m02 stat /var/lib/dpkg/alternatives/iptables
	I0329 17:46:28.243280  652427 oci.go:278] the created container "multinode-20220329174520-564087-m02" has a running status.
	I0329 17:46:28.243315  652427 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa...
	I0329 17:46:28.352947  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0329 17:46:28.353011  652427 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 17:46:28.439794  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:46:28.474473  652427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 17:46:28.474496  652427 kic_runner.go:114] Args: [docker exec --privileged multinode-20220329174520-564087-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 17:46:28.564773  652427 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:46:28.597591  652427 machine.go:88] provisioning docker machine ...
	I0329 17:46:28.597636  652427 ubuntu.go:169] provisioning hostname "multinode-20220329174520-564087-m02"
	I0329 17:46:28.597701  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:28.632823  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:28.633110  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:28.633138  652427 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20220329174520-564087-m02 && echo "multinode-20220329174520-564087-m02" | sudo tee /etc/hostname
	I0329 17:46:28.766115  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20220329174520-564087-m02
	
	I0329 17:46:28.766206  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:28.797826  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:28.797996  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:28.798026  652427 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220329174520-564087-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220329174520-564087-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220329174520-564087-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 17:46:28.916911  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 17:46:28.916949  652427 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem
ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 17:46:28.916974  652427 ubuntu.go:177] setting up certificates
	I0329 17:46:28.916985  652427 provision.go:83] configureAuth start
	I0329 17:46:28.917046  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087-m02
	I0329 17:46:28.949268  652427 provision.go:138] copyHostCerts
	I0329 17:46:28.949312  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:46:28.949352  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 17:46:28.949364  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 17:46:28.949439  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 17:46:28.949531  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:46:28.949559  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 17:46:28.949568  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 17:46:28.949603  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 17:46:28.949656  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:46:28.949683  652427 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 17:46:28.949694  652427 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 17:46:28.949724  652427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 17:46:28.949779  652427 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.multinode-20220329174520-564087-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220329174520-564087-m02]
	I0329 17:46:29.106466  652427 provision.go:172] copyRemoteCerts
	I0329 17:46:29.106538  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 17:46:29.106577  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.139382  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:29.228429  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0329 17:46:29.228500  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0329 17:46:29.245652  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0329 17:46:29.245711  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0329 17:46:29.262283  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0329 17:46:29.262348  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 17:46:29.279716  652427 provision.go:86] duration metric: configureAuth took 362.712391ms
	I0329 17:46:29.279748  652427 ubuntu.go:193] setting minikube options for container-runtime
	I0329 17:46:29.279938  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:46:29.279989  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.312282  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:29.312430  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:29.312444  652427 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 17:46:29.429214  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 17:46:29.429264  652427 ubuntu.go:71] root file system type: overlay
	I0329 17:46:29.429486  652427 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 17:46:29.429555  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.462121  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:29.462284  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:29.462382  652427 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 17:46:29.590118  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 17:46:29.590193  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:29.622550  652427 main.go:130] libmachine: Using SSH client type: native
	I0329 17:46:29.622701  652427 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49519 <nil> <nil>}
	I0329 17:46:29.622720  652427 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 17:46:30.245909  652427 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 17:46:29.582120620 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 17:46:30.245951  652427 machine.go:91] provisioned docker machine in 1.648333276s
	I0329 17:46:30.245970  652427 client.go:171] LocalClient.Create took 11.519055245s
	I0329 17:46:30.245989  652427 start.go:169] duration metric: libmachine.API.Create for "multinode-20220329174520-564087" took 11.519124811s
	I0329 17:46:30.246003  652427 start.go:302] post-start starting for "multinode-20220329174520-564087-m02" (driver="docker")
	I0329 17:46:30.246012  652427 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 17:46:30.246078  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 17:46:30.246130  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.277490  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.364671  652427 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 17:46:30.367418  652427 command_runner.go:130] > NAME="Ubuntu"
	I0329 17:46:30.367445  652427 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0329 17:46:30.367453  652427 command_runner.go:130] > ID=ubuntu
	I0329 17:46:30.367460  652427 command_runner.go:130] > ID_LIKE=debian
	I0329 17:46:30.367468  652427 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0329 17:46:30.367474  652427 command_runner.go:130] > VERSION_ID="20.04"
	I0329 17:46:30.367485  652427 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0329 17:46:30.367497  652427 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0329 17:46:30.367506  652427 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0329 17:46:30.367539  652427 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0329 17:46:30.367549  652427 command_runner.go:130] > VERSION_CODENAME=focal
	I0329 17:46:30.367553  652427 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0329 17:46:30.367651  652427 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 17:46:30.367671  652427 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 17:46:30.367686  652427 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 17:46:30.367698  652427 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 17:46:30.367710  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 17:46:30.367769  652427 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 17:46:30.367850  652427 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> 5640872.pem in /etc/ssl/certs
	I0329 17:46:30.367863  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /etc/ssl/certs/5640872.pem
	I0329 17:46:30.367965  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 17:46:30.374493  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:46:30.391498  652427 start.go:305] post-start completed in 145.478012ms
	I0329 17:46:30.391818  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087-m02
	I0329 17:46:30.423636  652427 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/config.json ...
	I0329 17:46:30.423930  652427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 17:46:30.423987  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.455928  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.537369  652427 command_runner.go:130] > 18%
	I0329 17:46:30.537705  652427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 17:46:30.541289  652427 command_runner.go:130] > 241G
	I0329 17:46:30.541553  652427 start.go:130] duration metric: createHost completed in 11.817959572s
	I0329 17:46:30.541574  652427 start.go:81] releasing machines lock for "multinode-20220329174520-564087-m02", held for 11.818121343s
	I0329 17:46:30.541656  652427 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087-m02
	I0329 17:46:30.574878  652427 out.go:176] * Found network options:
	I0329 17:46:30.576445  652427 out.go:176]   - NO_PROXY=192.168.49.2
	W0329 17:46:30.576500  652427 proxy.go:118] fail to check proxy env: Error ip not in block
	W0329 17:46:30.576558  652427 proxy.go:118] fail to check proxy env: Error ip not in block
	I0329 17:46:30.576640  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 17:46:30.576692  652427 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 17:46:30.576750  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.576695  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:46:30.609630  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.610562  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:46:30.698938  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:46:30.833371  652427 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0329 17:46:30.833400  652427 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0329 17:46:30.833416  652427 command_runner.go:130] > <H1>302 Moved</H1>
	I0329 17:46:30.833423  652427 command_runner.go:130] > The document has moved
	I0329 17:46:30.833432  652427 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0329 17:46:30.833437  652427 command_runner.go:130] > </BODY></HTML>
	I0329 17:46:30.833498  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:46:30.833506  652427 command_runner.go:130] > [Unit]
	I0329 17:46:30.833512  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:46:30.833518  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:46:30.833528  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:46:30.833538  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:46:30.833549  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:46:30.833560  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:46:30.833570  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:46:30.833576  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:46:30.833586  652427 command_runner.go:130] > [Service]
	I0329 17:46:30.833598  652427 command_runner.go:130] > Type=notify
	I0329 17:46:30.833601  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:46:30.833611  652427 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0329 17:46:30.833623  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:46:30.833639  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:46:30.833653  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:46:30.833667  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:46:30.833680  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:46:30.833694  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:46:30.833710  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:46:30.833726  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:46:30.833743  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:46:30.833753  652427 command_runner.go:130] > ExecStart=
	I0329 17:46:30.833775  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:46:30.833787  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:46:30.833798  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:46:30.833812  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:46:30.833819  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:46:30.833829  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:46:30.833838  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:46:30.833847  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:46:30.833858  652427 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0329 17:46:30.833867  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:46:30.833873  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:46:30.833884  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:46:30.833888  652427 command_runner.go:130] > Delegate=yes
	I0329 17:46:30.833900  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:46:30.833909  652427 command_runner.go:130] > KillMode=process
	I0329 17:46:30.833917  652427 command_runner.go:130] > [Install]
	I0329 17:46:30.833924  652427 command_runner.go:130] > WantedBy=multi-user.target
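The empty `ExecStart=` followed by a full `ExecStart=...` line in the unit dump above is systemd's standard drop-in idiom for *replacing* an inherited command rather than appending a second one (which, as the unit's own comments note, systemd rejects for non-oneshot services). A minimal sketch of writing such a drop-in, pointed at a temp directory instead of `/etc/systemd/system` so it runs unprivileged; the override path and dockerd flags here are illustrative, not the ones minikube uses:

```shell
# Write a minimal drop-in that replaces the base unit's ExecStart.
# The bare "ExecStart=" clears the command inherited from the base
# unit; the second line supplies the replacement. Hypothetical
# content, written to a temp dir rather than /etc/systemd/system.
unitdir="$(mktemp -d)"
cat > "$unitdir/override.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
cat "$unitdir/override.conf"
```

After installing a real drop-in, `sudo systemctl daemon-reload` (as the log does a few lines below) is what makes systemd pick it up.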
	I0329 17:46:30.833949  652427 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 17:46:30.834006  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 17:46:30.843529  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 17:46:30.855086  652427 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0329 17:46:30.855110  652427 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
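The crictl.yaml write above is a simple mkdir-then-tee pattern: create the directory, pipe the two endpoint lines through `sudo tee` so the write itself runs privileged. A sketch of the same pattern against a temp directory instead of `/etc`, so no sudo is needed:

```shell
# Sketch of the crictl.yaml write seen in the log, pointed at a
# temp directory instead of /etc so it can run unprivileged.
dir="$(mktemp -d)"
mkdir -p "$dir"
printf '%s' 'runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
' | tee "$dir/crictl.yaml" > /dev/null
cat "$dir/crictl.yaml"
```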
	I0329 17:46:30.855881  652427 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 17:46:30.932521  652427 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 17:46:31.008655  652427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 17:46:31.018037  652427 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0329 17:46:31.018058  652427 command_runner.go:130] > [Unit]
	I0329 17:46:31.018064  652427 command_runner.go:130] > Description=Docker Application Container Engine
	I0329 17:46:31.018069  652427 command_runner.go:130] > Documentation=https://docs.docker.com
	I0329 17:46:31.018073  652427 command_runner.go:130] > BindsTo=containerd.service
	I0329 17:46:31.018081  652427 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0329 17:46:31.018091  652427 command_runner.go:130] > Wants=network-online.target
	I0329 17:46:31.018104  652427 command_runner.go:130] > Requires=docker.socket
	I0329 17:46:31.018114  652427 command_runner.go:130] > StartLimitBurst=3
	I0329 17:46:31.018120  652427 command_runner.go:130] > StartLimitIntervalSec=60
	I0329 17:46:31.018127  652427 command_runner.go:130] > [Service]
	I0329 17:46:31.018131  652427 command_runner.go:130] > Type=notify
	I0329 17:46:31.018136  652427 command_runner.go:130] > Restart=on-failure
	I0329 17:46:31.018141  652427 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0329 17:46:31.018151  652427 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0329 17:46:31.018160  652427 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0329 17:46:31.018170  652427 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0329 17:46:31.018184  652427 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0329 17:46:31.018199  652427 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0329 17:46:31.018213  652427 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0329 17:46:31.018226  652427 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0329 17:46:31.018240  652427 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0329 17:46:31.018250  652427 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0329 17:46:31.018259  652427 command_runner.go:130] > ExecStart=
	I0329 17:46:31.018276  652427 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0329 17:46:31.018291  652427 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0329 17:46:31.018302  652427 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0329 17:46:31.018316  652427 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0329 17:46:31.018327  652427 command_runner.go:130] > LimitNOFILE=infinity
	I0329 17:46:31.018335  652427 command_runner.go:130] > LimitNPROC=infinity
	I0329 17:46:31.018339  652427 command_runner.go:130] > LimitCORE=infinity
	I0329 17:46:31.018347  652427 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0329 17:46:31.018352  652427 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0329 17:46:31.018358  652427 command_runner.go:130] > TasksMax=infinity
	I0329 17:46:31.018363  652427 command_runner.go:130] > TimeoutStartSec=0
	I0329 17:46:31.018373  652427 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0329 17:46:31.018383  652427 command_runner.go:130] > Delegate=yes
	I0329 17:46:31.018397  652427 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0329 17:46:31.018408  652427 command_runner.go:130] > KillMode=process
	I0329 17:46:31.018428  652427 command_runner.go:130] > [Install]
	I0329 17:46:31.018439  652427 command_runner.go:130] > WantedBy=multi-user.target
	I0329 17:46:31.018493  652427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 17:46:31.094457  652427 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 17:46:31.103969  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:46:31.141113  652427 command_runner.go:130] > 20.10.13
	I0329 17:46:31.142886  652427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 17:46:31.180277  652427 command_runner.go:130] > 20.10.13
	I0329 17:46:31.185191  652427 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 17:46:31.186476  652427 out.go:176]   - env NO_PROXY=192.168.49.2
	I0329 17:46:31.186531  652427 cli_runner.go:133] Run: docker network inspect multinode-20220329174520-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 17:46:31.218241  652427 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0329 17:46:31.221554  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
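The host.minikube.internal update above is an idempotent hosts-entry rewrite: filter out any existing line for the name with `grep -v`, append the fresh mapping, write to a temp file, then copy it back — so repeated runs leave exactly one entry. A sketch of the same pattern on a scratch file (the real command targets `/etc/hosts` and needs sudo):

```shell
# Idempotent hosts-entry update, as in the log, on a scratch file.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
add_entry() {
  # Drop any existing line for the name, then append the fresh mapping.
  { grep -v $'\thost.minikube.internal$' "$hosts"; \
    printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
  mv "$hosts.new" "$hosts"
}
add_entry
add_entry   # running twice must not duplicate the entry
```

Note the write goes through `$hosts.new` (the log uses `/tmp/h.$$`): redirecting `grep -v ... "$hosts"` straight back into `$hosts` would truncate the file before grep reads it.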
	I0329 17:46:31.231052  652427 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087 for IP: 192.168.49.3
	I0329 17:46:31.231159  652427 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 17:46:31.231201  652427 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 17:46:31.231215  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0329 17:46:31.231226  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0329 17:46:31.231238  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0329 17:46:31.231249  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0329 17:46:31.231296  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem (1338 bytes)
	W0329 17:46:31.231330  652427 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087_empty.pem, impossibly tiny 0 bytes
	I0329 17:46:31.231349  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1679 bytes)
	I0329 17:46:31.231372  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 17:46:31.231394  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 17:46:31.231416  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 17:46:31.231452  652427 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 17:46:31.231479  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem -> /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.231488  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.231503  652427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.231880  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 17:46:31.248809  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0329 17:46:31.265517  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 17:46:31.282382  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0329 17:46:31.298857  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem --> /usr/share/ca-certificates/564087.pem (1338 bytes)
	I0329 17:46:31.315403  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /usr/share/ca-certificates/5640872.pem (1708 bytes)
	I0329 17:46:31.332058  652427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 17:46:31.348770  652427 ssh_runner.go:195] Run: openssl version
	I0329 17:46:31.353423  652427 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0329 17:46:31.353480  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/564087.pem && ln -fs /usr/share/ca-certificates/564087.pem /etc/ssl/certs/564087.pem"
	I0329 17:46:31.360293  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.363230  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.363359  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.363408  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/564087.pem
	I0329 17:46:31.367738  652427 command_runner.go:130] > 51391683
	I0329 17:46:31.367944  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/564087.pem /etc/ssl/certs/51391683.0"
	I0329 17:46:31.374845  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5640872.pem && ln -fs /usr/share/ca-certificates/5640872.pem /etc/ssl/certs/5640872.pem"
	I0329 17:46:31.381870  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.384584  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.384689  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.384738  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5640872.pem
	I0329 17:46:31.389169  652427 command_runner.go:130] > 3ec20f2e
	I0329 17:46:31.389423  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5640872.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 17:46:31.396284  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 17:46:31.403143  652427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.405952  652427 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.406065  652427 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.406102  652427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 17:46:31.410661  652427 command_runner.go:130] > b5213941
	I0329 17:46:31.410719  652427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
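The `test -L || ln -fs` commands above build OpenSSL's hash-named lookup links: OpenSSL finds CAs in `/etc/ssl/certs` by files named `<subject-hash>.0`, where the hash comes from `openssl x509 -hash -noout -in cert.pem` (the `b5213941` printed just above). A sketch of that linking step in a temp directory, reusing a hash value from the log rather than computing one:

```shell
# Sketch of the hash-named CA symlink step from the log. The hash
# (51391683) is copied from the log output above; normally it comes
# from `openssl x509 -hash -noout -in cert.pem`.
certdir="$(mktemp -d)"
touch "$certdir/564087.pem"
hash=51391683
# Create the <hash>.0 link only if one does not already exist.
test -L "$certdir/$hash.0" || ln -fs "$certdir/564087.pem" "$certdir/$hash.0"
readlink "$certdir/$hash.0"
```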
	I0329 17:46:31.417818  652427 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 17:46:31.496155  652427 command_runner.go:130] > cgroupfs
	I0329 17:46:31.498030  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:46:31.498045  652427 cni.go:154] 2 nodes found, recommending kindnet
	I0329 17:46:31.498058  652427 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 17:46:31.498074  652427 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220329174520-564087 NodeName:multinode-20220329174520-564087-m02 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCAFi
le:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 17:46:31.498180  652427 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220329174520-564087-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 17:46:31.498239  652427 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220329174520-564087-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0329 17:46:31.498290  652427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 17:46:31.505442  652427 command_runner.go:130] > kubeadm
	I0329 17:46:31.505465  652427 command_runner.go:130] > kubectl
	I0329 17:46:31.505470  652427 command_runner.go:130] > kubelet
	I0329 17:46:31.505491  652427 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 17:46:31.505541  652427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0329 17:46:31.512273  652427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (413 bytes)
	I0329 17:46:31.524469  652427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 17:46:31.536739  652427 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 17:46:31.539551  652427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 17:46:31.549072  652427 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:46:31.549306  652427 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:46:31.549364  652427 start.go:282] JoinCluster: &{Name:multinode-20220329174520-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:multinode-20220329174520-564087 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:46:31.549449  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0329 17:46:31.549494  652427 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:46:31.580977  652427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:46:31.711236  652427 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4etg3b.08xa8aumglz9h3at --discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 
	I0329 17:46:31.715409  652427 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0329 17:46:31.715460  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4etg3b.08xa8aumglz9h3at --discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220329174520-564087-m02"
	I0329 17:46:31.921830  652427 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1021-gcp\n", err: exit status 1
	I0329 17:46:31.989345  652427 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0329 17:46:37.964200  652427 command_runner.go:130] > [preflight] Running pre-flight checks
	I0329 17:46:37.964226  652427 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0329 17:46:37.964233  652427 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1021-gcp
	I0329 17:46:37.964237  652427 command_runner.go:130] > DOCKER_VERSION: 20.10.13
	I0329 17:46:37.964246  652427 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0329 17:46:37.964258  652427 command_runner.go:130] > OS: Linux
	I0329 17:46:37.964266  652427 command_runner.go:130] > CGROUPS_CPU: enabled
	I0329 17:46:37.964278  652427 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0329 17:46:37.964289  652427 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0329 17:46:37.964307  652427 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0329 17:46:37.964316  652427 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0329 17:46:37.964321  652427 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0329 17:46:37.964327  652427 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0329 17:46:37.964334  652427 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0329 17:46:37.964339  652427 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0329 17:46:37.964349  652427 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0329 17:46:37.964359  652427 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0329 17:46:37.964366  652427 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0329 17:46:37.964374  652427 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0329 17:46:37.964385  652427 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0329 17:46:37.964392  652427 command_runner.go:130] > This node has joined the cluster:
	I0329 17:46:37.964398  652427 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0329 17:46:37.964408  652427 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0329 17:46:37.964418  652427 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0329 17:46:37.964438  652427 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4etg3b.08xa8aumglz9h3at --discovery-token-ca-cert-hash sha256:8242f97a683f4e9219cd05f2b79b4985e9ef8625a214ed5c4c5ead77332786a9 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220329174520-564087-m02": (6.248966616s)
	I0329 17:46:37.964458  652427 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0329 17:46:38.132714  652427 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0329 17:46:38.132755  652427 start.go:284] JoinCluster complete in 6.583390289s
	I0329 17:46:38.132766  652427 cni.go:93] Creating CNI manager for ""
	I0329 17:46:38.132775  652427 cni.go:154] 2 nodes found, recommending kindnet
	I0329 17:46:38.132829  652427 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0329 17:46:38.136185  652427 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0329 17:46:38.136207  652427 command_runner.go:130] >   Size: 2675000   	Blocks: 5232       IO Block: 4096   regular file
	I0329 17:46:38.136214  652427 command_runner.go:130] > Device: 34h/52d	Inode: 8004372     Links: 1
	I0329 17:46:38.136220  652427 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0329 17:46:38.136225  652427 command_runner.go:130] > Access: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:46:38.136231  652427 command_runner.go:130] > Modify: 2021-08-11 19:10:31.000000000 +0000
	I0329 17:46:38.136235  652427 command_runner.go:130] > Change: 2022-03-21 20:07:13.664642338 +0000
	I0329 17:46:38.136239  652427 command_runner.go:130] >  Birth: -
	I0329 17:46:38.136316  652427 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 17:46:38.136333  652427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0329 17:46:38.148856  652427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 17:46:38.285034  652427 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0329 17:46:38.285081  652427 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0329 17:46:38.285090  652427 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0329 17:46:38.285097  652427 command_runner.go:130] > daemonset.apps/kindnet configured
	I0329 17:46:38.285134  652427 start.go:208] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0329 17:46:38.287285  652427 out.go:176] * Verifying Kubernetes components...
	I0329 17:46:38.287341  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:46:38.297040  652427 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:46:38.297379  652427 kapi.go:59] client config for multinode-20220329174520-564087: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode-20220329174520-564087/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/multinode
-20220329174520-564087/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x167ac60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0329 17:46:38.297695  652427 node_ready.go:35] waiting up to 6m0s for node "multinode-20220329174520-564087-m02" to be "Ready" ...
	I0329 17:46:38.297758  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:38.297766  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:38.297772  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:38.299797  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:38.299814  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:38.299819  652427 round_trippers.go:580]     Audit-Id: 9ed3346a-8583-43a5-bfc3-981f89b068bc
	I0329 17:46:38.299824  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:38.299828  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:38.299832  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:38.299836  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:38.299841  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:38 GMT
	I0329 17:46:38.299938  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:38.801017  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:38.801047  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:38.801082  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:38.803485  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:38.803506  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:38.803512  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:38 GMT
	I0329 17:46:38.803516  652427 round_trippers.go:580]     Audit-Id: 9e1d8393-dad9-44f3-ae3a-976c82ed7ce2
	I0329 17:46:38.803521  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:38.803525  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:38.803530  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:38.803534  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:38.803629  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:39.301256  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:39.301278  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:39.301285  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:39.303396  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:39.303416  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:39.303422  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:39 GMT
	I0329 17:46:39.303426  652427 round_trippers.go:580]     Audit-Id: 456b299e-347e-49bb-9709-7fbf84e791c2
	I0329 17:46:39.303430  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:39.303434  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:39.303439  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:39.303443  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:39.303522  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:39.801216  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:39.801239  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:39.801256  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:39.803912  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:39.803938  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:39.803947  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:39 GMT
	I0329 17:46:39.803955  652427 round_trippers.go:580]     Audit-Id: c7012e48-072b-4332-918c-35a936875441
	I0329 17:46:39.803963  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:39.803971  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:39.803984  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:39.803991  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:39.804105  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:40.300413  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:40.300435  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:40.300442  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:40.302602  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:40.302622  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:40.302630  652427 round_trippers.go:580]     Audit-Id: ab242fc9-aec1-45c6-a423-59cfde21eced
	I0329 17:46:40.302638  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:40.302645  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:40.302651  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:40.302657  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:40.302664  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:40 GMT
	I0329 17:46:40.302775  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:40.303085  652427 node_ready.go:58] node "multinode-20220329174520-564087-m02" has status "Ready":"False"
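The `node_ready` lines above come from minikube polling `GET /api/v1/nodes/<name>` and inspecting the returned Node object's `Ready` condition. A minimal sketch of that check, using only the standard library and the core/v1 Node schema (this is an illustrative reconstruction, not minikube's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeReady reports whether a Kubernetes Node object, as returned by
// GET /api/v1/nodes/<name>, has a condition of type "Ready" whose
// status is "True". Field names follow the core/v1 Node schema.
func nodeReady(body []byte) (bool, error) {
	var node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(body, &node); err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition reported yet: treat the node as not ready.
	return false, nil
}

func main() {
	// Matches the state logged above: the node exists but is not Ready.
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, err := nodeReady(sample)
	fmt.Println(ready, err) // prints: false <nil>
}
```

In the log, this check runs roughly every 500ms against the control plane at 192.168.49.2:8443 until the condition flips to "True" or the 6m0s wait expires.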
	I0329 17:46:40.800430  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:40.800487  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:40.800499  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:40.803017  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:40.803045  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:40.803055  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:40.803062  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:40.803069  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:40 GMT
	I0329 17:46:40.803080  652427 round_trippers.go:580]     Audit-Id: 7765c00f-d344-4c76-a107-e01a50f363a9
	I0329 17:46:40.803090  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:40.803101  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:40.803223  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:41.300692  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:41.300718  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:41.300728  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:41.303390  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:41.303417  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:41.303428  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:41.303436  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:41.303443  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:41.303451  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:41.303463  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:41 GMT
	I0329 17:46:41.303474  652427 round_trippers.go:580]     Audit-Id: 9af8188b-83c5-4116-831e-262f07849a0b
	I0329 17:46:41.303603  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:41.801147  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:41.801171  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:41.801181  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:41.802847  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:41.802871  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:41.802879  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:41.802885  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:41.802896  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:41.802903  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:41 GMT
	I0329 17:46:41.802919  652427 round_trippers.go:580]     Audit-Id: 795a6e34-8a29-4035-8216-c4f80c32c844
	I0329 17:46:41.802929  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:41.803043  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:42.300581  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:42.300608  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:42.300617  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:42.302837  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:42.302859  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:42.302867  652427 round_trippers.go:580]     Audit-Id: 09fe99d6-4a61-4ac3-a7f3-eb82ad08369f
	I0329 17:46:42.302875  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:42.302882  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:42.302888  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:42.302894  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:42.302900  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:42 GMT
	I0329 17:46:42.302999  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:42.303283  652427 node_ready.go:58] node "multinode-20220329174520-564087-m02" has status "Ready":"False"
	I0329 17:46:42.800600  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:42.800627  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:42.800634  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:42.803009  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:42.803031  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:42.803038  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:42.803042  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:42.803047  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:42 GMT
	I0329 17:46:42.803051  652427 round_trippers.go:580]     Audit-Id: 2b9d3025-853f-4611-b763-e473d841bcd7
	I0329 17:46:42.803055  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:42.803059  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:42.803193  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:43.300809  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:43.300835  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:43.300843  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:43.303032  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:43.303055  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:43.303064  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:43.303071  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:43.303078  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:43.303085  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:43 GMT
	I0329 17:46:43.303096  652427 round_trippers.go:580]     Audit-Id: 8b42898e-bb8b-4233-a388-98e03b7bafa5
	I0329 17:46:43.303102  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:43.303214  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"U
pdate","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:43.800773  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:43.800799  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:43.800806  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:43.803188  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:43.803207  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:43.803216  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:43 GMT
	I0329 17:46:43.803223  652427 round_trippers.go:580]     Audit-Id: 7778e186-b125-4fbd-a697-a484e631b7bd
	I0329 17:46:43.803230  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:43.803237  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:43.803248  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:43.803252  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:43.803375  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"560","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:message":{}}}}},"subresource":"status"},{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"Fi [truncated 4394 chars]
	I0329 17:46:44.300464  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:44.300495  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.300504  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.302786  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.302817  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.302827  652427 round_trippers.go:580]     Audit-Id: 1fe7df79-f0b1-42e6-83d9-c022421fed27
	I0329 17:46:44.302834  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.302842  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.302851  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.302865  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.302872  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.303002  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"573","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.alpha.kubernetes.io/cri-socket":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernet [truncated 4429 chars]
	I0329 17:46:44.303345  652427 node_ready.go:49] node "multinode-20220329174520-564087-m02" has status "Ready":"True"
	I0329 17:46:44.303367  652427 node_ready.go:38] duration metric: took 6.005654305s waiting for node "multinode-20220329174520-564087-m02" to be "Ready" ...
	I0329 17:46:44.303377  652427 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:44.303439  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0329 17:46:44.303449  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.303456  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.306030  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.306048  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.306053  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.306058  652427 round_trippers.go:580]     Audit-Id: 4c627f37-9990-4c01-9b79-61a0e561565e
	I0329 17:46:44.306062  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.306067  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.306072  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.306076  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.306543  652427 request.go:1181] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 69183 chars]
	I0329 17:46:44.308707  652427 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.308780  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-6tcql
	I0329 17:46:44.308792  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.308803  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.310448  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.310470  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.310479  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.310496  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.310502  652427 round_trippers.go:580]     Audit-Id: 786c23a6-9f1c-420c-9489-92d88f2e926c
	I0329 17:46:44.310510  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.310521  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.310532  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.310646  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-6tcql","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2","resourceVersion":"499","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"68e110e0-9803-497f-a89b-69bf6538d2ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68e110e0-9803-497f-a89b-69bf6538d2ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5986 chars]
	I0329 17:46:44.311012  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.311024  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.311031  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.312477  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.312494  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.312503  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.312510  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.312517  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.312523  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.312531  652427 round_trippers.go:580]     Audit-Id: 05e711fa-a117-4fc2-9020-f1cc213b7b5f
	I0329 17:46:44.312547  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.312663  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.312951  652427 pod_ready.go:92] pod "coredns-64897985d-6tcql" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.312964  652427 pod_ready.go:81] duration metric: took 4.235056ms waiting for pod "coredns-64897985d-6tcql" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.312972  652427 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.313010  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220329174520-564087
	I0329 17:46:44.313019  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.313025  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.314608  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.314629  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.314637  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.314645  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.314652  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.314660  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.314668  652427 round_trippers.go:580]     Audit-Id: 54f8b45f-dfb9-477b-98de-bac98b015d50
	I0329 17:46:44.314672  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.314810  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220329174520-564087","namespace":"kube-system","uid":"ac5cd989-3ac7-4d02-94c0-0c2843391dfe","resourceVersion":"433","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.mirror":"3a75749bd4b871de0c4b2bec21cffac5","kubernetes.io/config.seen":"2022-03-29T17:45:44.427846936Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube [truncated 5818 chars]
	I0329 17:46:44.315152  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.315164  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.315171  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.316638  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.316657  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.316670  652427 round_trippers.go:580]     Audit-Id: ceadb479-8a1c-4ac1-bdd1-dc3e4a2ee72a
	I0329 17:46:44.316677  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.316685  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.316695  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.316703  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.316715  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.316807  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.317099  652427 pod_ready.go:92] pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.317114  652427 pod_ready.go:81] duration metric: took 4.135789ms waiting for pod "etcd-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.317131  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.317181  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220329174520-564087
	I0329 17:46:44.317191  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.317200  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.318684  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.318699  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.318705  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.318709  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.318714  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.318721  652427 round_trippers.go:580]     Audit-Id: c541c22d-2540-4459-bd7f-d72d6f23d26c
	I0329 17:46:44.318735  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.318742  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.318871  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220329174520-564087","namespace":"kube-system","uid":"112c5d83-654f-4235-9e38-a435d3f2d433","resourceVersion":"363","creationTimestamp":"2022-03-29T17:45:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.mirror":"27de21fd79a687dd5ac855c0b6b9898c","kubernetes.io/config.seen":"2022-03-29T17:45:37.376573317Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio [truncated 8327 chars]
	I0329 17:46:44.319240  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.319255  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.319265  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.320591  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.320614  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.320622  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.320630  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.320638  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.320652  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.320660  652427 round_trippers.go:580]     Audit-Id: 59443bad-5834-4b07-b56c-6ffa0b39bcb9
	I0329 17:46:44.320671  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.320755  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.321007  652427 pod_ready.go:92] pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.321020  652427 pod_ready.go:81] duration metric: took 3.878271ms waiting for pod "kube-apiserver-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.321029  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.321090  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220329174520-564087
	I0329 17:46:44.321101  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.321106  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.322554  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.322574  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.322582  652427 round_trippers.go:580]     Audit-Id: bff57800-df0a-44d0-9dab-3058b46c38da
	I0329 17:46:44.322589  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.322595  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.322607  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.322613  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.322628  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.322738  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220329174520-564087","namespace":"kube-system","uid":"66589d1b-e363-4195-bbc3-4ff12b3bf3cf","resourceVersion":"370","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.mirror":"5f30e0b2d37ae23fdc738fd92896e2de","kubernetes.io/config.seen":"2022-03-29T17:45:44.427888140Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 7902 chars]
	I0329 17:46:44.323105  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.323118  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.323124  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.324414  652427 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0329 17:46:44.324429  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.324434  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.324440  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.324444  652427 round_trippers.go:580]     Audit-Id: 49b6745a-a2cc-481c-af87-d590a566744c
	I0329 17:46:44.324448  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.324455  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.324461  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.324615  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.324852  652427 pod_ready.go:92] pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.324864  652427 pod_ready.go:81] duration metric: took 3.830211ms waiting for pod "kube-controller-manager-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.324872  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.501261  652427 request.go:597] Waited for 176.328506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29kjv
	I0329 17:46:44.501316  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29kjv
	I0329 17:46:44.501321  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.501331  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.503511  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.503534  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.503541  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.503546  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.503551  652427 round_trippers.go:580]     Audit-Id: afc98a7b-2642-49d3-92e9-6c1e188fb8ec
	I0329 17:46:44.503556  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.503561  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.503568  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.503731  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29kjv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ca1dbe90-6525-4660-81a7-68b2c47378da","resourceVersion":"468","creationTimestamp":"2022-03-29T17:45:57Z","labels":{"controller-revision-hash":"8455b5959d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"27cb158d-aed9-4d83-a6c4-788f687069bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27cb158d-aed9-4d83-a6c4-788f687069bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5551 chars]
	I0329 17:46:44.701508  652427 request.go:597] Waited for 197.342922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.701580  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:44.701586  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.701593  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.703909  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.703931  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.703940  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.703948  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.703955  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.703963  652427 round_trippers.go:580]     Audit-Id: 76f26ed9-3ce7-4640-b6f3-3a68644659a1
	I0329 17:46:44.703970  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.703979  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.704096  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:44.704517  652427 pod_ready.go:92] pod "kube-proxy-29kjv" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:44.704533  652427 pod_ready.go:81] duration metric: took 379.655564ms waiting for pod "kube-proxy-29kjv" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.704545  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cww7z" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:44.901340  652427 request.go:597] Waited for 196.719477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cww7z
	I0329 17:46:44.901398  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cww7z
	I0329 17:46:44.901403  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:44.901413  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:44.903787  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:44.903810  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:44.903819  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:44.903827  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:44 GMT
	I0329 17:46:44.903834  652427 round_trippers.go:580]     Audit-Id: deefd7ca-0c80-4b17-858a-f1e7d465150b
	I0329 17:46:44.903841  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:44.903848  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:44.903853  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:44.903983  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cww7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"3f51eeab-69b9-40eb-87db-67785022f8e2","resourceVersion":"556","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"controller-revision-hash":"8455b5959d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"27cb158d-aed9-4d83-a6c4-788f687069bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27cb158d-aed9-4d83-a6c4-788f687069bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5559 chars]
	I0329 17:46:45.100792  652427 request.go:597] Waited for 196.346103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:45.100862  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087-m02
	I0329 17:46:45.100868  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.100876  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.103106  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.103134  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.103143  652427 round_trippers.go:580]     Audit-Id: 27dc1ef8-8b6a-4a6e-a22c-4a12caebe9b8
	I0329 17:46:45.103151  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.103162  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.103168  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.103175  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.103186  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.103290  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087-m02","uid":"41767f6d-5fe5-418e-8d96-600cfd4d4104","resourceVersion":"573","creationTimestamp":"2022-03-29T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:46:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.alpha.kubernetes.io/cri-socket":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os"
:{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernet [truncated 4429 chars]
	I0329 17:46:45.103612  652427 pod_ready.go:92] pod "kube-proxy-cww7z" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:45.103623  652427 pod_ready.go:81] duration metric: took 399.065611ms waiting for pod "kube-proxy-cww7z" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:45.103631  652427 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:45.301039  652427 request.go:597] Waited for 197.342623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:45.301121  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220329174520-564087
	I0329 17:46:45.301126  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.301134  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.303446  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.303470  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.303476  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.303481  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.303486  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.303495  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.303499  652427 round_trippers.go:580]     Audit-Id: 1bbaa79f-563f-44d1-adbd-4c5e117b109f
	I0329 17:46:45.303504  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.303611  652427 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220329174520-564087","namespace":"kube-system","uid":"4ba1ded4-06a3-44a7-922f-b02863ff0da0","resourceVersion":"369","creationTimestamp":"2022-03-29T17:45:45Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.mirror":"ada4753661f69c3f9eb0dea379f83828","kubernetes.io/config.seen":"2022-03-29T17:45:44.427890891Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","controller":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-03-29T17:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:
kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kub [truncated 4784 chars]
	I0329 17:46:45.501025  652427 request.go:597] Waited for 196.934621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:45.501119  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220329174520-564087
	I0329 17:46:45.501131  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.501142  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.503598  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.503618  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.503626  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.503633  652427 round_trippers.go:580]     Audit-Id: c2c5126f-c7bd-4da7-b6e4-1bf2dc20c829
	I0329 17:46:45.503641  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.503648  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.503655  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.503666  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.503771  652427 request.go:1181] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach
":"true"},"managedFields":[{"manager":"Go-http-client","operation":"Upd [truncated 5243 chars]
	I0329 17:46:45.504184  652427 pod_ready.go:92] pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 17:46:45.504203  652427 pod_ready.go:81] duration metric: took 400.563209ms waiting for pod "kube-scheduler-multinode-20220329174520-564087" in "kube-system" namespace to be "Ready" ...
	I0329 17:46:45.504213  652427 pod_ready.go:38] duration metric: took 1.20081773s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 17:46:45.504241  652427 system_svc.go:44] waiting for kubelet service to be running ....
	I0329 17:46:45.504297  652427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:46:45.513916  652427 system_svc.go:56] duration metric: took 9.669505ms WaitForService to wait for kubelet.
	I0329 17:46:45.513941  652427 kubeadm.go:548] duration metric: took 7.228773031s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0329 17:46:45.513961  652427 node_conditions.go:102] verifying NodePressure condition ...
	I0329 17:46:45.701415  652427 request.go:597] Waited for 187.354646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:45.701485  652427 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0329 17:46:45.701491  652427 round_trippers.go:469] Request Headers:
	I0329 17:46:45.701503  652427 round_trippers.go:473]     Accept: application/json, */*
	I0329 17:46:45.703911  652427 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0329 17:46:45.703935  652427 round_trippers.go:577] Response Headers:
	I0329 17:46:45.703942  652427 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f11b663-3756-4d6e-bb3d-181b915934b4
	I0329 17:46:45.703947  652427 round_trippers.go:580]     Date: Tue, 29 Mar 2022 17:46:45 GMT
	I0329 17:46:45.703956  652427 round_trippers.go:580]     Audit-Id: 50a63ad2-28f7-47f6-9363-0a1a0ecf766f
	I0329 17:46:45.703963  652427 round_trippers.go:580]     Cache-Control: no-cache, private
	I0329 17:46:45.703977  652427 round_trippers.go:580]     Content-Type: application/json
	I0329 17:46:45.703985  652427 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9f9debaa-1dd3-4fdc-a45b-82e9f42f8ba1
	I0329 17:46:45.704161  652427 request.go:1181] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"575"},"items":[{"metadata":{"name":"multinode-20220329174520-564087","uid":"0a183d28-dfeb-4d40-be0e-504c6f664b1b","resourceVersion":"481","creationTimestamp":"2022-03-29T17:45:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220329174520-564087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"923781973407d6dc536f326caa216e4920fd75c3","minikube.k8s.io/name":"multinode-20220329174520-564087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_03_29T17_45_45_0700","minikube.k8s.io/version":"v1.25.2","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","vol
umes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi [truncated 10717 chars]
	I0329 17:46:45.704622  652427 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0329 17:46:45.704639  652427 node_conditions.go:123] node cpu capacity is 8
	I0329 17:46:45.704650  652427 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0329 17:46:45.704654  652427 node_conditions.go:123] node cpu capacity is 8
	I0329 17:46:45.704658  652427 node_conditions.go:105] duration metric: took 190.693449ms to run NodePressure ...
	I0329 17:46:45.704668  652427 start.go:213] waiting for startup goroutines ...
	I0329 17:46:45.738572  652427 start.go:498] kubectl: 1.23.5, cluster: 1.23.5 (minor skew: 0)
	I0329 17:46:45.740746  652427 out.go:176] * Done! kubectl is now configured to use "multinode-20220329174520-564087" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:45:30 UTC, end at Tue 2022-03-29 17:54:55 UTC. --
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[214]: time="2022-03-29T17:45:31.806176191Z" level=info msg="Daemon shutdown complete"
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: docker.service: Succeeded.
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: Stopped Docker Application Container Engine.
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: Starting Docker Application Container Engine...
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.849726597Z" level=info msg="Starting up"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851663663Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851688615Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851714897Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.851724738Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852880813Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852911935Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852932126Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.852954703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.858635838Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863255611Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863276301Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863281546Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.863429327Z" level=info msg="Loading containers: start."
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.941376619Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.973804350Z" level=info msg="Loading containers: done."
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.984414784Z" level=info msg="Docker daemon" commit=906f57f graphdriver(s)=overlay2 version=20.10.13
	Mar 29 17:45:31 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:31.984477950Z" level=info msg="Daemon has completed initialization"
	Mar 29 17:45:31 multinode-20220329174520-564087 systemd[1]: Started Docker Application Container Engine.
	Mar 29 17:45:32 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:32.001133060Z" level=info msg="API listen on [::]:2376"
	Mar 29 17:45:32 multinode-20220329174520-564087 dockerd[459]: time="2022-03-29T17:45:32.004517067Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	30d9cee296bef       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 minutes ago       Running             busybox                   0                   d80189d9f4f6c
	cca2695fb4971       a4ca41631cc7a                                                                                         8 minutes ago       Running             coredns                   0                   0679bc810aadd
	4b576a888064c       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       0                   c87d2b87926b1
	d50e3f59b2ce9       kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c              8 minutes ago       Running             kindnet-cni               0                   2e96d073b624d
	17bbc3cf565ae       3c53fa8541f95                                                                                         8 minutes ago       Running             kube-proxy                0                   fd8b515a73b16
	b7d139996016a       3fc1d62d65872                                                                                         9 minutes ago       Running             kube-apiserver            0                   815a884ca3b74
	aff007f20f144       25f8c7f3da61c                                                                                         9 minutes ago       Running             etcd                      0                   8ab8a1d1f3db1
	c36ea01d8947b       b0c9e5e4dbb14                                                                                         9 minutes ago       Running             kube-controller-manager   0                   79225c76a0a7c
	9180528fcd7d6       884d49d6d8c9f                                                                                         9 minutes ago       Running             kube-scheduler            0                   d037fed5efd16
	
	* 
	* ==> coredns [cca2695fb497] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220329174520-564087
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220329174520-564087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=multinode-20220329174520-564087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T17_45_45_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:45:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220329174520-564087
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:54:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:45:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:45:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:45:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 17:52:24 +0000   Tue, 29 Mar 2022 17:46:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20220329174520-564087
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                c6a4332a-c343-40f9-a72a-fc1b4f5a5f06
	  Boot ID:                    b9773761-6fd5-4dc5-89e9-c6bdd61e4f8f
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-cbpdd                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 coredns-64897985d-6tcql                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m58s
	  kube-system                 etcd-multinode-20220329174520-564087                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m10s
	  kube-system                 kindnet-7hm65                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m58s
	  kube-system                 kube-apiserver-multinode-20220329174520-564087             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-controller-manager-multinode-20220329174520-564087    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-proxy-29kjv                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-scheduler-multinode-20220329174520-564087             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m56s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  9m18s (x4 over 9m18s)  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s (x4 over 9m18s)  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s (x3 over 9m18s)  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m11s                  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m11s                  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m11s                  kubelet     Node multinode-20220329174520-564087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m40s                  kubelet     Node multinode-20220329174520-564087 status is now: NodeReady
	
	
	Name:               multinode-20220329174520-564087-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220329174520-564087-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220329174520-564087-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:54:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 17:52:09 +0000   Tue, 29 Mar 2022 17:46:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20220329174520-564087-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                7a1b4424-c2ff-4b69-97d2-491c41ec39a6
	  Boot ID:                    b9773761-6fd5-4dc5-89e9-c6bdd61e4f8f
	  Kernel Version:             5.13.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-bgzlj    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         8m9s
	  kube-system                 kindnet-vp76g               100m (1%!)(MISSING)     100m (1%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      8m22s
	  kube-system                 kube-proxy-cww7z            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m19s                  kube-proxy  
	  Normal  Starting                 8m22s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m22s (x2 over 8m22s)  kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s (x2 over 8m22s)  kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s (x2 over 8m22s)  kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m12s                  kubelet     Node multinode-20220329174520-564087-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.374153] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.001833] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[Mar29 17:54] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.003601] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004869] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004805] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.001490] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004839] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.004838] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.001917] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000007] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.002733] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	[  +5.003649] IPv4: martian source 10.244.0.235 from 10.244.0.3, on dev br-aff26c540dc6
	[  +0.000006] ll header: 00000000: 02 42 ab 68 83 f4 02 42 c0 a8 31 02 08 00
	
	* 
	* ==> etcd [aff007f20f14] <==
	* {"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:multinode-20220329174520-564087 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:45:39.480Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.481Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:45:39.482Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-29T17:45:39.482Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-03-29T17:46:23.232Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"193.924274ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128011987960469521 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:477 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128011987960469519 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-03-29T17:46:23.232Z","caller":"traceutil/trace.go:171","msg":"trace[1682928115] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:529; }","duration":"183.598769ms","start":"2022-03-29T17:46:23.048Z","end":"2022-03-29T17:46:23.232Z","steps":["trace[1682928115] 'read index received'  (duration: 87.004511ms)","trace[1682928115] 'applied index is now lower than readState.Index'  (duration: 96.593273ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T17:46:23.232Z","caller":"traceutil/trace.go:171","msg":"trace[2023804428] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"359.546149ms","start":"2022-03-29T17:46:22.873Z","end":"2022-03-29T17:46:23.232Z","steps":["trace[2023804428] 'process raft request'  (duration: 165.064256ms)","trace[2023804428] 'compare'  (duration: 193.811363ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:46:23.232Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.732616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T17:46:23.232Z","caller":"traceutil/trace.go:171","msg":"trace[1007111003] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:508; }","duration":"183.765438ms","start":"2022-03-29T17:46:23.048Z","end":"2022-03-29T17:46:23.232Z","steps":["trace[1007111003] 'agreement among raft nodes before linearized reading'  (duration: 183.656295ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:46:23.232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T17:46:22.872Z","time spent":"359.667331ms","remote":"127.0.0.1:34902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:477 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128011987960469519 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	{"level":"info","ts":"2022-03-29T17:46:24.173Z","caller":"traceutil/trace.go:171","msg":"trace[1074712309] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:531; }","duration":"124.451109ms","start":"2022-03-29T17:46:24.049Z","end":"2022-03-29T17:46:24.173Z","steps":["trace[1074712309] 'read index received'  (duration: 124.430089ms)","trace[1074712309] 'applied index is now lower than readState.Index'  (duration: 18.679µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:46:24.275Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"226.645468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T17:46:24.275Z","caller":"traceutil/trace.go:171","msg":"trace[332262182] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:509; }","duration":"226.734293ms","start":"2022-03-29T17:46:24.049Z","end":"2022-03-29T17:46:24.275Z","steps":["trace[332262182] 'agreement among raft nodes before linearized reading'  (duration: 124.556797ms)","trace[332262182] 'range keys from in-memory index tree'  (duration: 102.063021ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  17:54:55 up  2:37,  0 users,  load average: 0.30, 0.34, 0.55
	Linux multinode-20220329174520-564087 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b7d139996016] <==
	* I0329 17:45:41.543920       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0329 17:45:41.543965       1 cache.go:39] Caches are synced for autoregister controller
	I0329 17:45:41.543971       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0329 17:45:41.543988       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0329 17:45:41.545569       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0329 17:45:41.557601       1 controller.go:611] quota admission added evaluator for: namespaces
	I0329 17:45:42.415713       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0329 17:45:42.415754       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0329 17:45:42.420806       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0329 17:45:42.423826       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0329 17:45:42.423842       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0329 17:45:42.772732       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0329 17:45:42.800352       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0329 17:45:42.875483       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0329 17:45:42.880152       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0329 17:45:42.881029       1 controller.go:611] quota admission added evaluator for: endpoints
	I0329 17:45:42.884400       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0329 17:45:43.558513       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0329 17:45:44.248578       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0329 17:45:44.255064       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0329 17:45:44.264704       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0329 17:45:44.456809       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0329 17:45:57.546128       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0329 17:45:57.562452       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0329 17:45:58.972746       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [c36ea01d8947] <==
	* I0329 17:45:57.660898       1 shared_informer.go:247] Caches are synced for attach detach 
	I0329 17:45:57.660936       1 shared_informer.go:247] Caches are synced for disruption 
	I0329 17:45:57.660951       1 disruption.go:371] Sending events to api server.
	I0329 17:45:57.744428       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0329 17:45:57.744552       1 shared_informer.go:247] Caches are synced for PV protection 
	I0329 17:45:57.744750       1 shared_informer.go:247] Caches are synced for expand 
	I0329 17:45:57.744901       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:45:57.745409       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:45:57.757625       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0329 17:45:57.872383       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0329 17:45:57.881350       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-c7txq"
	I0329 17:45:58.158220       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:45:58.158249       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 17:45:58.164401       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:46:17.546673       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0329 17:46:33.586591       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220329174520-564087-m02" does not exist
	I0329 17:46:33.591936       1 range_allocator.go:374] Set node multinode-20220329174520-564087-m02 PodCIDR to [10.244.1.0/24]
	I0329 17:46:33.595738       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cww7z"
	I0329 17:46:33.595769       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vp76g"
	W0329 17:46:37.548383       1 node_lifecycle_controller.go:1012] Missing timestamp for Node multinode-20220329174520-564087-m02. Assuming now as a timestamp.
	I0329 17:46:37.548431       1 event.go:294] "Event occurred" object="multinode-20220329174520-564087-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220329174520-564087-m02 event: Registered Node multinode-20220329174520-564087-m02 in Controller"
	I0329 17:46:46.543461       1 event.go:294] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7978565885 to 2"
	I0329 17:46:46.549046       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-bgzlj"
	I0329 17:46:46.552444       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-cbpdd"
	I0329 17:46:47.558414       1 event.go:294] "Event occurred" object="default/busybox-7978565885-bgzlj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7978565885-bgzlj"
	
	* 
	* ==> kube-proxy [17bbc3cf565a] <==
	* I0329 17:45:58.945092       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:45:58.945172       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:45:58.945209       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:45:58.967823       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:45:58.967857       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:45:58.967867       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:45:58.967893       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:45:58.969670       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:45:58.970267       1 config.go:317] "Starting service config controller"
	I0329 17:45:58.970292       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:45:58.970320       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:45:58.970349       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:45:59.071096       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0329 17:45:59.071109       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [9180528fcd7d] <==
	* E0329 17:45:41.560233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0329 17:45:41.560266       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:45:41.560272       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:41.560295       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:41.560300       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:45:41.559981       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:45:41.560321       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:45:41.559663       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0329 17:45:41.560365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0329 17:45:41.560501       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:45:41.560547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:45:42.371446       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:45:42.371473       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:45:42.381520       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:45:42.381550       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:45:42.427688       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 17:45:42.427714       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 17:45:42.511401       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0329 17:45:42.511427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:45:42.525586       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 17:45:42.525626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 17:45:42.543995       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:42.544034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:45:44.012532       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0329 17:45:44.553608       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:45:30 UTC, end at Tue 2022-03-29 17:54:55 UTC. --
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.644920    1922 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.653220    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: E0329 17:45:57.658674    1922 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.659426    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846335    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d9c821d-cc40-4073-95ab-b810b61210a7-lib-modules\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846403    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l82l\" (UniqueName: \"kubernetes.io/projected/ca1dbe90-6525-4660-81a7-68b2c47378da-kube-api-access-5l82l\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846436    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca1dbe90-6525-4660-81a7-68b2c47378da-kube-proxy\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846467    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca1dbe90-6525-4660-81a7-68b2c47378da-lib-modules\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846501    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8d9c821d-cc40-4073-95ab-b810b61210a7-cni-cfg\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846524    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d9c821d-cc40-4073-95ab-b810b61210a7-xtables-lock\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846556    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca1dbe90-6525-4660-81a7-68b2c47378da-xtables-lock\") pod \"kube-proxy-29kjv\" (UID: \"ca1dbe90-6525-4660-81a7-68b2c47378da\") " pod="kube-system/kube-proxy-29kjv"
	Mar 29 17:45:57 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:57.846588    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdfm\" (UniqueName: \"kubernetes.io/projected/8d9c821d-cc40-4073-95ab-b810b61210a7-kube-api-access-8qdfm\") pod \"kindnet-7hm65\" (UID: \"8d9c821d-cc40-4073-95ab-b810b61210a7\") " pod="kube-system/kindnet-7hm65"
	Mar 29 17:45:59 multinode-20220329174520-564087 kubelet[1922]: I0329 17:45:59.399736    1922 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
	Mar 29 17:45:59 multinode-20220329174520-564087 kubelet[1922]: E0329 17:45:59.887924    1922 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Mar 29 17:46:04 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:04.400920    1922 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
	Mar 29 17:46:04 multinode-20220329174520-564087 kubelet[1922]: E0329 17:46:04.898395    1922 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.344652    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.344849    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440200    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rbd\" (UniqueName: \"kubernetes.io/projected/7d9d3f42-beb4-4d9d-82ac-3984ac52c132-kube-api-access-n8rbd\") pod \"storage-provisioner\" (UID: \"7d9d3f42-beb4-4d9d-82ac-3984ac52c132\") " pod="kube-system/storage-provisioner"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440262    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d9d3f42-beb4-4d9d-82ac-3984ac52c132-tmp\") pod \"storage-provisioner\" (UID: \"7d9d3f42-beb4-4d9d-82ac-3984ac52c132\") " pod="kube-system/storage-provisioner"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440359    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q74j\" (UniqueName: \"kubernetes.io/projected/a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2-kube-api-access-5q74j\") pod \"coredns-64897985d-6tcql\" (UID: \"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2\") " pod="kube-system/coredns-64897985d-6tcql"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.440412    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2-config-volume\") pod \"coredns-64897985d-6tcql\" (UID: \"a5e3e718-8c8d-46b6-a815-c9ba5e55dbc2\") " pod="kube-system/coredns-64897985d-6tcql"
	Mar 29 17:46:15 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:15.965211    1922 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0679bc810aadd4b766bcaec4315c4bc3a9c4a9401c9acec103467e82125419cc"
	Mar 29 17:46:46 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:46.556823    1922 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:46:46 multinode-20220329174520-564087 kubelet[1922]: I0329 17:46:46.717263    1922 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dm4\" (UniqueName: \"kubernetes.io/projected/7d54ecec-d81f-404f-8b4f-566eed570a96-kube-api-access-f4dm4\") pod \"busybox-7978565885-cbpdd\" (UID: \"7d54ecec-d81f-404f-8b4f-566eed570a96\") " pod="default/busybox-7978565885-cbpdd"
	
	* 
	* ==> storage-provisioner [4b576a888064] <==
	* I0329 17:46:15.966882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:46:15.976253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:46:15.976304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0329 17:46:15.988818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0329 17:46:15.989014       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220329174520-564087_9ab0d9c4-8635-4458-a854-00a8c7a090df!
	I0329 17:46:15.989335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e0b69ac-dafe-4f7b-bbd2-dd67c3d402a9", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220329174520-564087_9ab0d9c4-8635-4458-a854-00a8c7a090df became leader
	I0329 17:46:16.089189       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220329174520-564087_9ab0d9c4-8635-4458-a854-00a8c7a090df!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20220329174520-564087 -n multinode-20220329174520-564087
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20220329174520-564087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20220329174520-564087 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20220329174520-564087 describe pod : exit status 1 (39.115745ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context multinode-20220329174520-564087 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (123.13s)

TestNetworkPlugins/group/auto/DNS (280.79s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150461008s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:19:18.010529  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 18:19:20.205115  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 18:19:25.127404  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.152920754s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:19:30.085005  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135711988s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133822681s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136477407s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:20:23.058773  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:23.064084  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:23.074352  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:23.094621  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:23.134883  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:23.215189  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:23.375598  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:23.696156  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:24.336728  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:25.617458  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:28.178244  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:20:33.299178  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138505358s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:20:43.540217  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159550248s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:21:04.021111  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:21:17.157831  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148827697s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133950873s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13916026s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:23:06.903924  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134111676s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/auto/DNS (280.79s)

TestNetworkPlugins/group/calico/Start (519.03s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker
E0329 18:24:18.010995  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 18:24:18.427229  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:18.432560  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:18.442800  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:18.463094  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:18.504163  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:18.584379  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:18.744791  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:19.065124  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:19.705764  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:20.986834  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:23.547221  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:23.719600  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:24:25.127925  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:24:28.668421  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:24:30.085492  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 18:24:38.908800  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m39.010918214s)

-- stdout --
	* [calico-20220329180854-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20220329180854-564087 in cluster calico-20220329180854-564087
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0329 18:24:16.858313  898861 out.go:297] Setting OutFile to fd 1 ...
	I0329 18:24:16.858457  898861 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:24:16.858471  898861 out.go:310] Setting ErrFile to fd 2...
	I0329 18:24:16.858478  898861 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:24:16.858581  898861 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 18:24:16.858894  898861 out.go:304] Setting JSON to false
	I0329 18:24:16.860430  898861 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11210,"bootTime":1648567047,"procs":574,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 18:24:16.860506  898861 start.go:124] virtualization: kvm guest
	I0329 18:24:16.863106  898861 out.go:176] * [calico-20220329180854-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 18:24:16.864473  898861 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 18:24:16.863284  898861 notify.go:193] Checking for updates...
	I0329 18:24:16.865761  898861 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 18:24:16.867103  898861 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 18:24:16.868463  898861 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 18:24:16.869920  898861 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 18:24:16.870471  898861 config.go:176] Loaded profile config "cilium-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:24:16.870575  898861 config.go:176] Loaded profile config "embed-certs-20220329181004-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:24:16.870648  898861 config.go:176] Loaded profile config "false-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:24:16.870708  898861 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 18:24:16.912892  898861 docker.go:137] docker version: linux-20.10.14
	I0329 18:24:16.913028  898861 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 18:24:17.007677  898861 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 18:24:16.945872566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 18:24:17.007800  898861 docker.go:254] overlay module found
	I0329 18:24:17.010445  898861 out.go:176] * Using the docker driver based on user configuration
	I0329 18:24:17.010484  898861 start.go:283] selected driver: docker
	I0329 18:24:17.010492  898861 start.go:800] validating driver "docker" against <nil>
	I0329 18:24:17.010515  898861 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0329 18:24:17.010598  898861 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0329 18:24:17.010629  898861 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0329 18:24:17.012175  898861 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0329 18:24:17.012883  898861 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 18:24:17.106304  898861 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 18:24:17.045151716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 18:24:17.106473  898861 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 18:24:17.106667  898861 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 18:24:17.106690  898861 cni.go:93] Creating CNI manager for "calico"
	I0329 18:24:17.106696  898861 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0329 18:24:17.106703  898861 start_flags.go:306] config:
	{Name:calico-20220329180854-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220329180854-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 18:24:17.109006  898861 out.go:176] * Starting control plane node calico-20220329180854-564087 in cluster calico-20220329180854-564087
	I0329 18:24:17.109083  898861 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 18:24:17.110563  898861 out.go:176] * Pulling base image ...
	I0329 18:24:17.110601  898861 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:24:17.110633  898861 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 18:24:17.110635  898861 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 18:24:17.110750  898861 cache.go:57] Caching tarball of preloaded images
	I0329 18:24:17.110974  898861 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 18:24:17.111000  898861 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 18:24:17.111144  898861 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/config.json ...
	I0329 18:24:17.111164  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/config.json: {Name:mk455a1f3dc4a32b968d8db6a5db7a15d5db1737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:17.159474  898861 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 18:24:17.159501  898861 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 18:24:17.159515  898861 cache.go:208] Successfully downloaded all kic artifacts
	I0329 18:24:17.159552  898861 start.go:348] acquiring machines lock for calico-20220329180854-564087: {Name:mk54fe051fa5d91d91c68ec4aea0852a75c75551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 18:24:17.159688  898861 start.go:352] acquired machines lock for "calico-20220329180854-564087" in 114.965µs
	I0329 18:24:17.159714  898861 start.go:90] Provisioning new machine with config: &{Name:calico-20220329180854-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220329180854-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 18:24:17.159816  898861 start.go:127] createHost starting for "" (driver="docker")
	I0329 18:24:17.162943  898861 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 18:24:17.163247  898861 start.go:161] libmachine.API.Create for "calico-20220329180854-564087" (driver="docker")
	I0329 18:24:17.163289  898861 client.go:168] LocalClient.Create starting
	I0329 18:24:17.163386  898861 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 18:24:17.163426  898861 main.go:130] libmachine: Decoding PEM data...
	I0329 18:24:17.163458  898861 main.go:130] libmachine: Parsing certificate...
	I0329 18:24:17.163530  898861 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 18:24:17.163556  898861 main.go:130] libmachine: Decoding PEM data...
	I0329 18:24:17.163572  898861 main.go:130] libmachine: Parsing certificate...
	I0329 18:24:17.163914  898861 cli_runner.go:133] Run: docker network inspect calico-20220329180854-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 18:24:17.198916  898861 cli_runner.go:180] docker network inspect calico-20220329180854-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 18:24:17.199000  898861 network_create.go:262] running [docker network inspect calico-20220329180854-564087] to gather additional debugging logs...
	I0329 18:24:17.199022  898861 cli_runner.go:133] Run: docker network inspect calico-20220329180854-564087
	W0329 18:24:17.231556  898861 cli_runner.go:180] docker network inspect calico-20220329180854-564087 returned with exit code 1
	I0329 18:24:17.231589  898861 network_create.go:265] error running [docker network inspect calico-20220329180854-564087]: docker network inspect calico-20220329180854-564087: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220329180854-564087
	I0329 18:24:17.231620  898861 network_create.go:267] output of [docker network inspect calico-20220329180854-564087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220329180854-564087
	
	** /stderr **
	I0329 18:24:17.231670  898861 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 18:24:17.265515  898861 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-390c46fea444 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f1:43:b7:38}}
	I0329 18:24:17.266115  898861 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6078a97f5e2a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:b0:d8:e4:83}}
	I0329 18:24:17.267045  898861 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-6b8ddbe0c003 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:69:6d:c1:b7}}
	I0329 18:24:17.268099  898861 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0006d0700] misses:0}
	I0329 18:24:17.268154  898861 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 18:24:17.268174  898861 network_create.go:114] attempt to create docker network calico-20220329180854-564087 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0329 18:24:17.268245  898861 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220329180854-564087
	I0329 18:24:17.341725  898861 network_create.go:98] docker network calico-20220329180854-564087 192.168.76.0/24 created
	I0329 18:24:17.341772  898861 kic.go:106] calculated static IP "192.168.76.2" for the "calico-20220329180854-564087" container
	I0329 18:24:17.341842  898861 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 18:24:17.377039  898861 cli_runner.go:133] Run: docker volume create calico-20220329180854-564087 --label name.minikube.sigs.k8s.io=calico-20220329180854-564087 --label created_by.minikube.sigs.k8s.io=true
	I0329 18:24:17.411307  898861 oci.go:102] Successfully created a docker volume calico-20220329180854-564087
	I0329 18:24:17.411397  898861 cli_runner.go:133] Run: docker run --rm --name calico-20220329180854-564087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329180854-564087 --entrypoint /usr/bin/test -v calico-20220329180854-564087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 18:24:17.995292  898861 oci.go:106] Successfully prepared a docker volume calico-20220329180854-564087
	I0329 18:24:17.995365  898861 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:24:17.995393  898861 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 18:24:17.995493  898861 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220329180854-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 18:24:23.993105  898861 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220329180854-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (5.99751927s)
	I0329 18:24:23.993140  898861 kic.go:188] duration metric: took 5.997743 seconds to extract preloaded images to volume
	W0329 18:24:23.993227  898861 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0329 18:24:23.993247  898861 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0329 18:24:23.993302  898861 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 18:24:24.116113  898861 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329180854-564087 --name calico-20220329180854-564087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329180854-564087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329180854-564087 --network calico-20220329180854-564087 --ip 192.168.76.2 --volume calico-20220329180854-564087:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 18:24:24.681824  898861 cli_runner.go:133] Run: docker container inspect calico-20220329180854-564087 --format={{.State.Running}}
	I0329 18:24:24.719491  898861 cli_runner.go:133] Run: docker container inspect calico-20220329180854-564087 --format={{.State.Status}}
	I0329 18:24:24.759059  898861 cli_runner.go:133] Run: docker exec calico-20220329180854-564087 stat /var/lib/dpkg/alternatives/iptables
	I0329 18:24:24.830812  898861 oci.go:278] the created container "calico-20220329180854-564087" has a running status.
	I0329 18:24:24.830847  898861 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa...
	I0329 18:24:24.974151  898861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 18:24:25.080012  898861 cli_runner.go:133] Run: docker container inspect calico-20220329180854-564087 --format={{.State.Status}}
	I0329 18:24:25.117569  898861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 18:24:25.117599  898861 kic_runner.go:114] Args: [docker exec --privileged calico-20220329180854-564087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 18:24:25.217962  898861 cli_runner.go:133] Run: docker container inspect calico-20220329180854-564087 --format={{.State.Status}}
	I0329 18:24:25.253971  898861 machine.go:88] provisioning docker machine ...
	I0329 18:24:25.254024  898861 ubuntu.go:169] provisioning hostname "calico-20220329180854-564087"
	I0329 18:24:25.254092  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:25.290519  898861 main.go:130] libmachine: Using SSH client type: native
	I0329 18:24:25.290719  898861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49726 <nil> <nil>}
	I0329 18:24:25.290736  898861 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220329180854-564087 && echo "calico-20220329180854-564087" | sudo tee /etc/hostname
	I0329 18:24:25.422178  898861 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220329180854-564087
	
	I0329 18:24:25.422275  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:25.456813  898861 main.go:130] libmachine: Using SSH client type: native
	I0329 18:24:25.456987  898861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49726 <nil> <nil>}
	I0329 18:24:25.457024  898861 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220329180854-564087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220329180854-564087/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220329180854-564087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 18:24:25.576936  898861 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 18:24:25.576975  898861 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 18:24:25.577015  898861 ubuntu.go:177] setting up certificates
	I0329 18:24:25.577030  898861 provision.go:83] configureAuth start
	I0329 18:24:25.577105  898861 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220329180854-564087
	I0329 18:24:25.610391  898861 provision.go:138] copyHostCerts
	I0329 18:24:25.610453  898861 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 18:24:25.610472  898861 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 18:24:25.610540  898861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 18:24:25.610621  898861 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 18:24:25.610633  898861 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 18:24:25.610661  898861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 18:24:25.610706  898861 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 18:24:25.610715  898861 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 18:24:25.610735  898861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 18:24:25.610778  898861 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.calico-20220329180854-564087 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220329180854-564087]
	I0329 18:24:25.847927  898861 provision.go:172] copyRemoteCerts
	I0329 18:24:25.848002  898861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 18:24:25.848044  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:25.881702  898861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49726 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa Username:docker}
	I0329 18:24:25.968530  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 18:24:25.987373  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 18:24:26.005864  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0329 18:24:26.024550  898861 provision.go:86] duration metric: configureAuth took 447.502174ms
	I0329 18:24:26.024576  898861 ubuntu.go:193] setting minikube options for container-runtime
	I0329 18:24:26.024737  898861 config.go:176] Loaded profile config "calico-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:24:26.024784  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:26.062190  898861 main.go:130] libmachine: Using SSH client type: native
	I0329 18:24:26.062336  898861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49726 <nil> <nil>}
	I0329 18:24:26.062352  898861 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 18:24:26.181353  898861 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 18:24:26.181384  898861 ubuntu.go:71] root file system type: overlay
	I0329 18:24:26.181563  898861 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 18:24:26.181625  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:26.216405  898861 main.go:130] libmachine: Using SSH client type: native
	I0329 18:24:26.216565  898861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49726 <nil> <nil>}
	I0329 18:24:26.216644  898861 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 18:24:26.346450  898861 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
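(The unit file echoed above relies on systemd's list-reset behavior: an empty `ExecStart=` clears the command inherited from the base configuration, so only the second `ExecStart=` takes effect. A minimal sketch of that shape, checked against a throwaway temp file rather than the real unit:)

```shell
# Illustrative only: verify that a drop-in fragment with a reset line
# leaves exactly one effective (non-empty) ExecStart command.
set -eu
u=$(mktemp)
printf 'ExecStart=\nExecStart=/usr/bin/dockerd -H fd://\n' > "$u"
# count ExecStart= lines that actually carry a command
grep -c '^ExecStart=.' "$u"    # prints: 1
rm -f "$u"
```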
	I0329 18:24:26.346545  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:26.382455  898861 main.go:130] libmachine: Using SSH client type: native
	I0329 18:24:26.382631  898861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49726 <nil> <nil>}
	I0329 18:24:26.382661  898861 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
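(The `diff ... || { mv ...; systemctl ...; }` command above is an idempotent compare-then-replace: the new unit is only installed, and the daemon only restarted, when the rendered file actually differs from the live one. A minimal sketch of the pattern, using throwaway temp files instead of the real systemd unit and without the sudo/systemctl steps:)

```shell
# Illustrative compare-then-replace: skip the install when nothing changed.
set -eu
cur=$(mktemp); new=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd -H fd://' > "$cur"
echo 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376' > "$new"
if diff -u "$cur" "$new" > /dev/null; then
  echo "unchanged"   # identical: nothing to install, no restart needed
else
  mv "$new" "$cur"   # stands in for: sudo mv + daemon-reload + restart docker
  echo "updated"
fi
rm -f "$cur"
```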
	I0329 18:24:27.148508  898861 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 18:24:26.342501837 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 18:24:27.148547  898861 machine.go:91] provisioned docker machine in 1.894546732s
	I0329 18:24:27.148557  898861 client.go:171] LocalClient.Create took 9.985259011s
	I0329 18:24:27.148577  898861 start.go:169] duration metric: libmachine.API.Create for "calico-20220329180854-564087" took 9.98533049s
	I0329 18:24:27.148590  898861 start.go:302] post-start starting for "calico-20220329180854-564087" (driver="docker")
	I0329 18:24:27.148601  898861 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 18:24:27.148663  898861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 18:24:27.148702  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:27.183300  898861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49726 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa Username:docker}
	I0329 18:24:27.272879  898861 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 18:24:27.275910  898861 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 18:24:27.275939  898861 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 18:24:27.275950  898861 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 18:24:27.275959  898861 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 18:24:27.275972  898861 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 18:24:27.276037  898861 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 18:24:27.276135  898861 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> 5640872.pem in /etc/ssl/certs
	I0329 18:24:27.276246  898861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 18:24:27.283294  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 18:24:27.301453  898861 start.go:305] post-start completed in 152.843491ms
	I0329 18:24:27.301821  898861 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220329180854-564087
	I0329 18:24:27.336947  898861 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/config.json ...
	I0329 18:24:27.337223  898861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 18:24:27.337264  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:27.375506  898861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49726 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa Username:docker}
	I0329 18:24:27.465792  898861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 18:24:27.469785  898861 start.go:130] duration metric: createHost completed in 10.309957871s
	I0329 18:24:27.469808  898861 start.go:81] releasing machines lock for "calico-20220329180854-564087", held for 10.31010615s
	I0329 18:24:27.469895  898861 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220329180854-564087
	I0329 18:24:27.507421  898861 ssh_runner.go:195] Run: systemctl --version
	I0329 18:24:27.507481  898861 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 18:24:27.507485  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:27.507531  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:27.546360  898861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49726 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa Username:docker}
	I0329 18:24:27.546816  898861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49726 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa Username:docker}
	I0329 18:24:27.633374  898861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 18:24:27.776505  898861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 18:24:27.787299  898861 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 18:24:27.787357  898861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 18:24:27.796983  898861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
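(The command above writes `/etc/crictl.yaml` by piping a `printf %s` heredoc-style string into `sudo tee`, pointing both crictl endpoints at the dockershim socket. The same write, sketched against a temp file instead of `/etc`:)

```shell
# Illustrative printf-to-tee config write (temp path, no sudo).
set -eu
f=$(mktemp)
printf '%s' 'runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
' | tee "$f" > /dev/null
# both endpoints should reference the dockershim socket
grep -c 'dockershim.sock' "$f"    # prints: 2
rm -f "$f"
```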
	I0329 18:24:27.809735  898861 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 18:24:27.888288  898861 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 18:24:27.964816  898861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 18:24:27.975753  898861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 18:24:28.068281  898861 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 18:24:28.078005  898861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 18:24:28.116896  898861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 18:24:28.160250  898861 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 18:24:28.160363  898861 cli_runner.go:133] Run: docker network inspect calico-20220329180854-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 18:24:28.197632  898861 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0329 18:24:28.201044  898861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
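(The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` idiom above refreshes a single `/etc/hosts` entry: it filters out any stale line for the name, appends the current mapping, and swaps the file in. A minimal sketch of the same idiom against a temp file, not the real `/etc/hosts`:)

```shell
# Illustrative hosts-entry refresh: remove stale mapping, append fresh one.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host\.minikube\.internal$' "$hosts"
  printf '192.168.76.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host\.minikube\.internal' "$hosts"   # only the fresh 192.168.76.1 line remains
rm -f "$hosts"
```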
	I0329 18:24:28.210866  898861 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:24:28.210933  898861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 18:24:28.243693  898861 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 18:24:28.243722  898861 docker.go:537] Images already preloaded, skipping extraction
	I0329 18:24:28.243777  898861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 18:24:28.275686  898861 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 18:24:28.275726  898861 cache_images.go:84] Images are preloaded, skipping loading
	I0329 18:24:28.275782  898861 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 18:24:28.363258  898861 cni.go:93] Creating CNI manager for "calico"
	I0329 18:24:28.363291  898861 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0329 18:24:28.363307  898861 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220329180854-564087 NodeName:calico-20220329180854-564087 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 18:24:28.363476  898861 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220329180854-564087"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 18:24:28.363621  898861 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220329180854-564087 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220329180854-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0329 18:24:28.363684  898861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 18:24:28.371997  898861 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 18:24:28.372055  898861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 18:24:28.378987  898861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0329 18:24:28.392556  898861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 18:24:28.405471  898861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0329 18:24:28.418970  898861 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0329 18:24:28.421900  898861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 18:24:28.432695  898861 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087 for IP: 192.168.76.2
	I0329 18:24:28.432819  898861 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 18:24:28.432881  898861 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 18:24:28.432941  898861 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/client.key
	I0329 18:24:28.432959  898861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/client.crt with IP's: []
	I0329 18:24:29.206001  898861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/client.crt ...
	I0329 18:24:29.206040  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/client.crt: {Name:mkbdc962b4c8757cfbc1796893658e4209c96662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:29.206232  898861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/client.key ...
	I0329 18:24:29.206246  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/client.key: {Name:mkbf051ddaa39cfe1bc7c5ea7be3a4da624fb5c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:29.206338  898861 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.key.31bdca25
	I0329 18:24:29.206357  898861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 18:24:29.510189  898861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.crt.31bdca25 ...
	I0329 18:24:29.510226  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.crt.31bdca25: {Name:mk5de3b35531599b87c111c10c4481db55167b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:29.527431  898861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.key.31bdca25 ...
	I0329 18:24:29.527467  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.key.31bdca25: {Name:mk2304c87cb2b8a4b47237292121b7f3d307f6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:29.527648  898861 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.crt
	I0329 18:24:29.527715  898861 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.key
	I0329 18:24:29.527757  898861 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.key
	I0329 18:24:29.527769  898861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.crt with IP's: []
	I0329 18:24:29.731226  898861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.crt ...
	I0329 18:24:29.731261  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.crt: {Name:mkb7ef4931fc79a140e6ae72059d9588d6fc63e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:29.731443  898861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.key ...
	I0329 18:24:29.731457  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.key: {Name:mk0da0233b4e278d584ffc3354becc83c892859a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:29.731630  898861 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem (1338 bytes)
	W0329 18:24:29.731692  898861 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087_empty.pem, impossibly tiny 0 bytes
	I0329 18:24:29.731711  898861 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1679 bytes)
	I0329 18:24:29.731746  898861 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 18:24:29.731779  898861 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 18:24:29.731808  898861 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 18:24:29.731848  898861 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 18:24:29.732374  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 18:24:29.755635  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0329 18:24:29.777432  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 18:24:29.798713  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329180854-564087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 18:24:29.815618  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 18:24:29.832618  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0329 18:24:29.852389  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 18:24:29.875569  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0329 18:24:29.898134  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem --> /usr/share/ca-certificates/564087.pem (1338 bytes)
	I0329 18:24:29.930966  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /usr/share/ca-certificates/5640872.pem (1708 bytes)
	I0329 18:24:29.952543  898861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 18:24:29.974245  898861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 18:24:29.990853  898861 ssh_runner.go:195] Run: openssl version
	I0329 18:24:29.997524  898861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/564087.pem && ln -fs /usr/share/ca-certificates/564087.pem /etc/ssl/certs/564087.pem"
	I0329 18:24:30.006530  898861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/564087.pem
	I0329 18:24:30.009719  898861 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 18:24:30.009782  898861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/564087.pem
	I0329 18:24:30.015428  898861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/564087.pem /etc/ssl/certs/51391683.0"
	I0329 18:24:30.022802  898861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5640872.pem && ln -fs /usr/share/ca-certificates/5640872.pem /etc/ssl/certs/5640872.pem"
	I0329 18:24:30.030025  898861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5640872.pem
	I0329 18:24:30.033090  898861 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 18:24:30.033182  898861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5640872.pem
	I0329 18:24:30.037978  898861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5640872.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 18:24:30.045987  898861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 18:24:30.055371  898861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:24:30.059418  898861 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:24:30.059474  898861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:24:30.065701  898861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 18:24:30.074048  898861 kubeadm.go:391] StartCluster: {Name:calico-20220329180854-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220329180854-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 18:24:30.074212  898861 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 18:24:30.109797  898861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 18:24:30.117352  898861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 18:24:30.124378  898861 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 18:24:30.124440  898861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 18:24:30.131707  898861 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 18:24:30.131747  898861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 18:24:30.734335  898861 out.go:203]   - Generating certificates and keys ...
	I0329 18:24:33.749859  898861 out.go:203]   - Booting up control plane ...
	I0329 18:24:41.804859  898861 out.go:203]   - Configuring RBAC rules ...
	I0329 18:24:42.221777  898861 cni.go:93] Creating CNI manager for "calico"
	I0329 18:24:42.223465  898861 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0329 18:24:42.223703  898861 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 18:24:42.223730  898861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0329 18:24:42.268582  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 18:24:43.758465  898861 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.489842413s)
	I0329 18:24:43.758513  898861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 18:24:43.758620  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:43.758620  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=calico-20220329180854-564087 minikube.k8s.io/updated_at=2022_03_29T18_24_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:43.851765  898861 ops.go:34] apiserver oom_adj: -16
	I0329 18:24:43.851844  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:44.408880  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:44.908990  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:45.409313  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:45.909375  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:46.409001  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:46.909122  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:47.409418  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:47.909544  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:48.409456  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:48.909298  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:49.408850  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:49.909037  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:50.409352  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:50.908772  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:51.408925  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:51.908736  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:52.408893  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:52.909026  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:53.409027  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:53.908727  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:54.409620  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:54.908677  898861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:24:55.147773  898861 kubeadm.go:1020] duration metric: took 11.389226398s to wait for elevateKubeSystemPrivileges.
	I0329 18:24:55.147815  898861 kubeadm.go:393] StartCluster complete in 25.073777656s
	I0329 18:24:55.147836  898861 settings.go:142] acquiring lock: {Name:mkf193dd78851319876bf7c47a47f525125a4fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:55.147965  898861 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 18:24:55.150519  898861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mke8ff89e3fadc84c0cca24c5855d2fcb9124f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:24:55.668912  898861 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220329180854-564087" rescaled to 1
	I0329 18:24:55.668976  898861 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 18:24:55.671243  898861 out.go:176] * Verifying Kubernetes components...
	I0329 18:24:55.669034  898861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 18:24:55.671303  898861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 18:24:55.669037  898861 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 18:24:55.671420  898861 addons.go:65] Setting storage-provisioner=true in profile "calico-20220329180854-564087"
	I0329 18:24:55.669271  898861 config.go:176] Loaded profile config "calico-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:24:55.671445  898861 addons.go:65] Setting default-storageclass=true in profile "calico-20220329180854-564087"
	I0329 18:24:55.671470  898861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220329180854-564087"
	I0329 18:24:55.671498  898861 addons.go:153] Setting addon storage-provisioner=true in "calico-20220329180854-564087"
	W0329 18:24:55.671521  898861 addons.go:165] addon storage-provisioner should already be in state true
	I0329 18:24:55.671550  898861 host.go:66] Checking if "calico-20220329180854-564087" exists ...
	I0329 18:24:55.671870  898861 cli_runner.go:133] Run: docker container inspect calico-20220329180854-564087 --format={{.State.Status}}
	I0329 18:24:55.672017  898861 cli_runner.go:133] Run: docker container inspect calico-20220329180854-564087 --format={{.State.Status}}
	I0329 18:24:55.714908  898861 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 18:24:55.715065  898861 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 18:24:55.715085  898861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 18:24:55.715145  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:55.718070  898861 addons.go:153] Setting addon default-storageclass=true in "calico-20220329180854-564087"
	W0329 18:24:55.718094  898861 addons.go:165] addon default-storageclass should already be in state true
	I0329 18:24:55.718123  898861 host.go:66] Checking if "calico-20220329180854-564087" exists ...
	I0329 18:24:55.718539  898861 cli_runner.go:133] Run: docker container inspect calico-20220329180854-564087 --format={{.State.Status}}
	I0329 18:24:55.749285  898861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 18:24:55.750196  898861 node_ready.go:35] waiting up to 5m0s for node "calico-20220329180854-564087" to be "Ready" ...
	I0329 18:24:55.753229  898861 node_ready.go:49] node "calico-20220329180854-564087" has status "Ready":"True"
	I0329 18:24:55.753246  898861 node_ready.go:38] duration metric: took 3.017763ms waiting for node "calico-20220329180854-564087" to be "Ready" ...
	I0329 18:24:55.753256  898861 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 18:24:55.755775  898861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49726 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa Username:docker}
	I0329 18:24:55.762223  898861 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace to be "Ready" ...
	I0329 18:24:55.763606  898861 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 18:24:55.763641  898861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 18:24:55.763695  898861 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329180854-564087
	I0329 18:24:55.807702  898861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49726 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/calico-20220329180854-564087/id_rsa Username:docker}
	I0329 18:24:55.956786  898861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 18:24:55.964142  898861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 18:24:57.349379  898861 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.600047042s)
	I0329 18:24:57.349474  898861 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0329 18:24:57.382783  898861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.425950284s)
	I0329 18:24:57.382851  898861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.418680594s)
	I0329 18:24:57.384371  898861 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 18:24:57.384398  898861 addons.go:417] enableAddons completed in 1.715368442s
	I0329 18:24:57.773697  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:24:59.773882  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:01.774844  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:04.273914  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:06.273991  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:08.274358  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:10.841616  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:13.275008  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:15.774600  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:17.774690  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:20.275343  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:22.774123  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:24.774887  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:27.273911  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:29.274494  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:31.774080  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:33.774314  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:35.776718  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:38.275318  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:41.114921  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:43.273790  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:45.274917  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:47.275164  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:49.775066  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:51.775627  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:54.275159  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:56.775052  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:58.775314  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:01.273773  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:03.274645  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:05.773478  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:07.774400  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:10.273896  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:12.274382  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:14.774668  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:16.779016  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:19.274772  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:21.275657  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:23.775079  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:26.273410  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:28.273496  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:30.274922  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:32.774609  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:35.273397  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:37.274006  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:39.773533  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:41.774261  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:44.274699  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:46.776065  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:49.274441  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:51.773622  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:54.273271  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:56.274675  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:58.773952  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:00.774725  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:03.274091  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:05.274124  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:07.774924  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:10.273916  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:12.773585  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:14.773876  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:17.273874  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:19.274298  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:21.773936  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:23.775153  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:26.274329  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:28.274796  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:30.774509  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:33.276548  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:35.774463  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:37.775264  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:40.273517  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:42.274131  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:44.274735  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:46.776667  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:49.273532  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:51.773709  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:53.774660  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:55.775436  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:58.274106  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:00.773382  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:03.274470  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:05.274687  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:07.774795  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:09.775052  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:12.274435  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:14.774765  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:16.774897  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:19.274886  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:21.275610  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:23.774512  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:25.777106  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:28.273790  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:30.274010  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:32.274487  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:34.274652  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:36.773938  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:38.775002  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:41.274815  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:43.774144  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:45.774243  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:47.774751  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:50.274479  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:52.773169  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:54.774410  898861 pod_ready.go:102] pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:55.778363  898861 pod_ready.go:81] duration metric: took 4m0.016096118s waiting for pod "calico-kube-controllers-8594699699-wwgjw" in "kube-system" namespace to be "Ready" ...
	E0329 18:28:55.778387  898861 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 18:28:55.778396  898861 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-tfnvq" in "kube-system" namespace to be "Ready" ...
	I0329 18:28:57.789480  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:59.793020  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:02.290170  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:04.793233  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:07.288653  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:09.289035  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:11.289587  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:13.290705  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:15.789527  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:17.790320  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:19.790724  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:22.289134  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:24.289559  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:26.291807  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:28.789314  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:30.789987  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:32.790641  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:35.290566  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:37.790626  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:40.289697  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:42.290116  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:44.791044  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:46.793457  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:49.290412  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:51.791349  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:53.791484  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:56.289554  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:58.790381  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:00.791852  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:03.289685  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:05.289816  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:07.290755  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:09.789787  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:12.290096  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:14.849216  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:17.290476  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:19.789661  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:22.290015  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:24.290670  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:26.789423  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:28.789460  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:30.790455  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:33.288699  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:35.289888  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:37.290003  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:39.290126  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:41.789470  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:43.790386  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:46.289922  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:48.290444  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:50.789491  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:53.289560  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:55.290866  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:57.790329  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:00.290021  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:02.290247  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:04.850542  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:07.347194  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:09.793285  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:12.290339  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:14.790196  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:16.791492  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:19.290666  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:21.290774  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:23.789345  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:25.790365  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:27.790434  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:29.790574  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:31.794479  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:34.289337  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:36.292161  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:38.791522  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:41.290373  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:43.345112  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:45.790728  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:47.790990  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:49.793102  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:52.289816  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:54.290714  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:56.789920  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:58.794438  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:01.289563  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:03.790391  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:06.289358  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:08.791477  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:11.289573  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:13.290051  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:15.291340  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:17.294558  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:19.790856  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:21.791026  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:24.289431  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:26.789257  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:28.792397  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:31.289238  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:33.290140  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:35.290634  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:37.791902  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:40.290479  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:42.789906  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:44.790496  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:46.792136  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:49.345978  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:51.790769  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:54.289436  898861 pod_ready.go:102] pod "calico-node-tfnvq" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:55.799410  898861 pod_ready.go:81] duration metric: took 4m0.020996963s waiting for pod "calico-node-tfnvq" in "kube-system" namespace to be "Ready" ...
	E0329 18:32:55.799436  898861 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 18:32:55.799454  898861 pod_ready.go:38] duration metric: took 8m0.046184994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 18:32:55.802238  898861 out.go:176] 
	W0329 18:32:55.802391  898861 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0329 18:32:55.802409  898861 out.go:241] * 
	W0329 18:32:55.803220  898861 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0329 18:32:55.804778  898861 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (519.03s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (522.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: exit status 105 (8m42.086219026s)

                                                
                                                
-- stdout --
	* [custom-weave-20220329180854-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node custom-weave-20220329180854-564087 in cluster custom-weave-20220329180854-564087
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 18:25:05.466513  910617 out.go:297] Setting OutFile to fd 1 ...
	I0329 18:25:05.466652  910617 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:25:05.466668  910617 out.go:310] Setting ErrFile to fd 2...
	I0329 18:25:05.466675  910617 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:25:05.466785  910617 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 18:25:05.467117  910617 out.go:304] Setting JSON to false
	I0329 18:25:05.468669  910617 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11259,"bootTime":1648567047,"procs":589,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 18:25:05.468757  910617 start.go:124] virtualization: kvm guest
	I0329 18:25:05.472522  910617 out.go:176] * [custom-weave-20220329180854-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 18:25:05.473828  910617 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 18:25:05.472722  910617 notify.go:193] Checking for updates...
	I0329 18:25:05.475087  910617 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 18:25:05.476435  910617 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 18:25:05.477811  910617 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 18:25:05.479157  910617 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 18:25:05.479602  910617 config.go:176] Loaded profile config "calico-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:25:05.479687  910617 config.go:176] Loaded profile config "cilium-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:25:05.479766  910617 config.go:176] Loaded profile config "false-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:25:05.479825  910617 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 18:25:05.523997  910617 docker.go:137] docker version: linux-20.10.14
	I0329 18:25:05.524096  910617 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 18:25:05.622670  910617 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 18:25:05.555464705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 18:25:05.622789  910617 docker.go:254] overlay module found
	I0329 18:25:05.625218  910617 out.go:176] * Using the docker driver based on user configuration
	I0329 18:25:05.625269  910617 start.go:283] selected driver: docker
	I0329 18:25:05.625276  910617 start.go:800] validating driver "docker" against <nil>
	I0329 18:25:05.625302  910617 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0329 18:25:05.625368  910617 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0329 18:25:05.625394  910617 out.go:241] ! Your cgroup does not allow setting memory.
	I0329 18:25:05.628969  910617 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0329 18:25:05.629678  910617 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 18:25:05.723460  910617 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 18:25:05.660522862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 18:25:05.723636  910617 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 18:25:05.723869  910617 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 18:25:05.723901  910617 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0329 18:25:05.723922  910617 start_flags.go:301] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0329 18:25:05.723937  910617 start_flags.go:306] config:
	{Name:custom-weave-20220329180854-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329180854-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 18:25:05.726255  910617 out.go:176] * Starting control plane node custom-weave-20220329180854-564087 in cluster custom-weave-20220329180854-564087
	I0329 18:25:05.726291  910617 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 18:25:05.727613  910617 out.go:176] * Pulling base image ...
	I0329 18:25:05.727643  910617 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:25:05.727676  910617 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 18:25:05.727699  910617 cache.go:57] Caching tarball of preloaded images
	I0329 18:25:05.727732  910617 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 18:25:05.727948  910617 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 18:25:05.727963  910617 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 18:25:05.728122  910617 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/config.json ...
	I0329 18:25:05.728154  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/config.json: {Name:mk9a2aef3a63b088ce24741d1658372995b85410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:05.774709  910617 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 18:25:05.774738  910617 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 18:25:05.774748  910617 cache.go:208] Successfully downloaded all kic artifacts
	I0329 18:25:05.774788  910617 start.go:348] acquiring machines lock for custom-weave-20220329180854-564087: {Name:mkfc49a208614fe365b48c836cc5e80420c77e1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 18:25:05.774935  910617 start.go:352] acquired machines lock for "custom-weave-20220329180854-564087" in 124.084µs
	I0329 18:25:05.774962  910617 start.go:90] Provisioning new machine with config: &{Name:custom-weave-20220329180854-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329180854-564087 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 18:25:05.775068  910617 start.go:127] createHost starting for "" (driver="docker")
	I0329 18:25:05.777369  910617 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 18:25:05.777643  910617 start.go:161] libmachine.API.Create for "custom-weave-20220329180854-564087" (driver="docker")
	I0329 18:25:05.777680  910617 client.go:168] LocalClient.Create starting
	I0329 18:25:05.777761  910617 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 18:25:05.777798  910617 main.go:130] libmachine: Decoding PEM data...
	I0329 18:25:05.777824  910617 main.go:130] libmachine: Parsing certificate...
	I0329 18:25:05.777899  910617 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 18:25:05.777930  910617 main.go:130] libmachine: Decoding PEM data...
	I0329 18:25:05.777949  910617 main.go:130] libmachine: Parsing certificate...
	I0329 18:25:05.778351  910617 cli_runner.go:133] Run: docker network inspect custom-weave-20220329180854-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 18:25:05.817919  910617 cli_runner.go:180] docker network inspect custom-weave-20220329180854-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 18:25:05.818007  910617 network_create.go:262] running [docker network inspect custom-weave-20220329180854-564087] to gather additional debugging logs...
	I0329 18:25:05.818034  910617 cli_runner.go:133] Run: docker network inspect custom-weave-20220329180854-564087
	W0329 18:25:05.850386  910617 cli_runner.go:180] docker network inspect custom-weave-20220329180854-564087 returned with exit code 1
	I0329 18:25:05.850425  910617 network_create.go:265] error running [docker network inspect custom-weave-20220329180854-564087]: docker network inspect custom-weave-20220329180854-564087: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220329180854-564087
	I0329 18:25:05.850439  910617 network_create.go:267] output of [docker network inspect custom-weave-20220329180854-564087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220329180854-564087
	
	** /stderr **
	I0329 18:25:05.850494  910617 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 18:25:05.886461  910617 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-390c46fea444 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f1:43:b7:38}}
	I0329 18:25:05.887189  910617 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000130428] misses:0}
	I0329 18:25:05.887240  910617 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 18:25:05.887254  910617 network_create.go:114] attempt to create docker network custom-weave-20220329180854-564087 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0329 18:25:05.887310  910617 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220329180854-564087
	I0329 18:25:05.959726  910617 network_create.go:98] docker network custom-weave-20220329180854-564087 192.168.58.0/24 created
	I0329 18:25:05.959772  910617 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20220329180854-564087" container
	I0329 18:25:05.959840  910617 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 18:25:05.995486  910617 cli_runner.go:133] Run: docker volume create custom-weave-20220329180854-564087 --label name.minikube.sigs.k8s.io=custom-weave-20220329180854-564087 --label created_by.minikube.sigs.k8s.io=true
	I0329 18:25:06.032019  910617 oci.go:102] Successfully created a docker volume custom-weave-20220329180854-564087
	I0329 18:25:06.032099  910617 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220329180854-564087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220329180854-564087 --entrypoint /usr/bin/test -v custom-weave-20220329180854-564087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 18:25:06.636722  910617 oci.go:106] Successfully prepared a docker volume custom-weave-20220329180854-564087
	I0329 18:25:06.636770  910617 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:25:06.636795  910617 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 18:25:06.636864  910617 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220329180854-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 18:25:12.866718  910617 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220329180854-564087:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (6.229792139s)
	I0329 18:25:12.866764  910617 kic.go:188] duration metric: took 6.229964 seconds to extract preloaded images to volume
	W0329 18:25:12.866815  910617 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0329 18:25:12.866828  910617 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0329 18:25:12.866892  910617 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 18:25:13.015310  910617 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220329180854-564087 --name custom-weave-20220329180854-564087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220329180854-564087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220329180854-564087 --network custom-weave-20220329180854-564087 --ip 192.168.58.2 --volume custom-weave-20220329180854-564087:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 18:25:13.510567  910617 cli_runner.go:133] Run: docker container inspect custom-weave-20220329180854-564087 --format={{.State.Running}}
	I0329 18:25:13.553977  910617 cli_runner.go:133] Run: docker container inspect custom-weave-20220329180854-564087 --format={{.State.Status}}
	I0329 18:25:13.592414  910617 cli_runner.go:133] Run: docker exec custom-weave-20220329180854-564087 stat /var/lib/dpkg/alternatives/iptables
	I0329 18:25:13.664237  910617 oci.go:278] the created container "custom-weave-20220329180854-564087" has a running status.
	I0329 18:25:13.664275  910617 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa...
	I0329 18:25:13.839720  910617 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 18:25:13.930414  910617 cli_runner.go:133] Run: docker container inspect custom-weave-20220329180854-564087 --format={{.State.Status}}
	I0329 18:25:13.971685  910617 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 18:25:13.971711  910617 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220329180854-564087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 18:25:14.067859  910617 cli_runner.go:133] Run: docker container inspect custom-weave-20220329180854-564087 --format={{.State.Status}}
	I0329 18:25:14.108619  910617 machine.go:88] provisioning docker machine ...
	I0329 18:25:14.108668  910617 ubuntu.go:169] provisioning hostname "custom-weave-20220329180854-564087"
	I0329 18:25:14.108732  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:14.147952  910617 main.go:130] libmachine: Using SSH client type: native
	I0329 18:25:14.148180  910617 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49731 <nil> <nil>}
	I0329 18:25:14.148210  910617 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220329180854-564087 && echo "custom-weave-20220329180854-564087" | sudo tee /etc/hostname
	I0329 18:25:14.287437  910617 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20220329180854-564087
	
	I0329 18:25:14.287508  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:14.328615  910617 main.go:130] libmachine: Using SSH client type: native
	I0329 18:25:14.328883  910617 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49731 <nil> <nil>}
	I0329 18:25:14.328913  910617 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220329180854-564087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220329180854-564087/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220329180854-564087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 18:25:14.453880  910617 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 18:25:14.453916  910617 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem
ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 18:25:14.453943  910617 ubuntu.go:177] setting up certificates
	I0329 18:25:14.453956  910617 provision.go:83] configureAuth start
	I0329 18:25:14.454033  910617 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220329180854-564087
	I0329 18:25:14.500726  910617 provision.go:138] copyHostCerts
	I0329 18:25:14.500795  910617 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 18:25:14.500803  910617 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 18:25:14.500860  910617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 18:25:14.500950  910617 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 18:25:14.500966  910617 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 18:25:14.500989  910617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 18:25:14.501093  910617 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 18:25:14.501110  910617 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 18:25:14.501138  910617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 18:25:14.501194  910617 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220329180854-564087 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220329180854-564087]
	I0329 18:25:14.767623  910617 provision.go:172] copyRemoteCerts
	I0329 18:25:14.767697  910617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 18:25:14.767741  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:14.812346  910617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49731 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa Username:docker}
	I0329 18:25:14.900886  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0329 18:25:14.918865  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 18:25:14.936129  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 18:25:14.954085  910617 provision.go:86] duration metric: configureAuth took 500.108213ms
	I0329 18:25:14.954120  910617 ubuntu.go:193] setting minikube options for container-runtime
	I0329 18:25:14.954301  910617 config.go:176] Loaded profile config "custom-weave-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:25:14.954361  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:14.988949  910617 main.go:130] libmachine: Using SSH client type: native
	I0329 18:25:14.989247  910617 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49731 <nil> <nil>}
	I0329 18:25:14.989274  910617 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 18:25:15.109375  910617 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 18:25:15.109413  910617 ubuntu.go:71] root file system type: overlay
	I0329 18:25:15.109653  910617 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 18:25:15.109721  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:15.144507  910617 main.go:130] libmachine: Using SSH client type: native
	I0329 18:25:15.144691  910617 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49731 <nil> <nil>}
	I0329 18:25:15.144751  910617 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 18:25:15.277502  910617 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 18:25:15.277598  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:15.318690  910617 main.go:130] libmachine: Using SSH client type: native
	I0329 18:25:15.318859  910617 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil>  [] 0s} 127.0.0.1 49731 <nil> <nil>}
	I0329 18:25:15.318876  910617 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 18:25:16.059493  910617 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 18:25:15.269948786 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
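The SSH command above uses a diff-or-replace idiom: `diff -u` the live unit against the staged copy, and only on a difference move the new file into place and restart the daemon, so an unchanged config never triggers a restart. A sketch of the same pattern under those assumptions (`replace_if_changed` is a hypothetical helper, not minikube code):

```python
import os
import shutil
import tempfile

def replace_if_changed(current, candidate):
    """Install `candidate` over `current` only when the contents differ.
    Returns True when a replacement happened (i.e. a restart is warranted),
    mirroring `diff -u a b || { mv b a; systemctl restart ...; }`."""
    if os.path.exists(current):
        with open(current) as a, open(candidate) as b:
            if a.read() == b.read():
                os.remove(candidate)   # identical: discard the staged copy
                return False
    shutil.move(candidate, current)
    return True

d = tempfile.mkdtemp()
cur = os.path.join(d, "docker.service")
new = os.path.join(d, "docker.service.new")
open(cur, "w").write("old unit")
open(new, "w").write("new unit")
print(replace_if_changed(cur, new))   # contents differ, so the file is replaced
```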
	
	I0329 18:25:16.059533  910617 machine.go:91] provisioned docker machine in 1.950884444s
	I0329 18:25:16.059547  910617 client.go:171] LocalClient.Create took 10.281855302s
	I0329 18:25:16.059570  910617 start.go:169] duration metric: libmachine.API.Create for "custom-weave-20220329180854-564087" took 10.281929346s
	I0329 18:25:16.059594  910617 start.go:302] post-start starting for "custom-weave-20220329180854-564087" (driver="docker")
	I0329 18:25:16.059606  910617 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 18:25:16.059675  910617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 18:25:16.059728  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:16.095945  910617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49731 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa Username:docker}
	I0329 18:25:16.185321  910617 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 18:25:16.188068  910617 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 18:25:16.188096  910617 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 18:25:16.188109  910617 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 18:25:16.188116  910617 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 18:25:16.188128  910617 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 18:25:16.188188  910617 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 18:25:16.188288  910617 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem -> 5640872.pem in /etc/ssl/certs
	I0329 18:25:16.188394  910617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 18:25:16.195267  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 18:25:16.212872  910617 start.go:305] post-start completed in 153.261314ms
	I0329 18:25:16.213262  910617 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220329180854-564087
	I0329 18:25:16.255074  910617 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/config.json ...
	I0329 18:25:16.255331  910617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 18:25:16.255384  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:16.294669  910617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49731 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa Username:docker}
	I0329 18:25:16.381593  910617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 18:25:16.385494  910617 start.go:130] duration metric: createHost completed in 10.610412525s
	I0329 18:25:16.385522  910617 start.go:81] releasing machines lock for "custom-weave-20220329180854-564087", held for 10.610574217s
	I0329 18:25:16.385613  910617 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220329180854-564087
	I0329 18:25:16.424936  910617 ssh_runner.go:195] Run: systemctl --version
	I0329 18:25:16.424997  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:16.425021  910617 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 18:25:16.425105  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:16.465026  910617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49731 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa Username:docker}
	I0329 18:25:16.465516  910617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49731 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa Username:docker}
	I0329 18:25:16.696462  910617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 18:25:16.708524  910617 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 18:25:16.718502  910617 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 18:25:16.718570  910617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 18:25:16.727715  910617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 18:25:16.740530  910617 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 18:25:16.830621  910617 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 18:25:16.921173  910617 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 18:25:16.931021  910617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 18:25:17.020615  910617 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 18:25:17.030767  910617 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 18:25:17.076968  910617 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 18:25:17.122219  910617 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 18:25:17.122321  910617 cli_runner.go:133] Run: docker network inspect custom-weave-20220329180854-564087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 18:25:17.160816  910617 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0329 18:25:17.164572  910617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
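The `/etc/hosts` update above uses a strip-then-append pattern: `grep -v` removes any existing line for the hostname, the new entry is echoed on, and the result is copied back, so the entry stays unique across reruns. A small sketch of the same idempotent upsert, assuming a tab-separated hosts format (`upsert_hosts_entry` is a hypothetical helper, not minikube code):

```python
def upsert_hosts_entry(hosts_text, ip, name):
    """Drop any existing line ending in '\\t<name>', then append the new
    'ip\\tname' entry, matching the grep -v / echo / cp pipeline above."""
    kept = [l for l in hosts_text.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

sample = "127.0.0.1\tlocalhost\n1.2.3.4\thost.minikube.internal\n"
print(upsert_hosts_entry(sample, "192.168.58.1", "host.minikube.internal"))
```

Running it twice with the same arguments yields the same file, which is why the provisioning step is safe to repeat.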
	I0329 18:25:17.175075  910617 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:25:17.175135  910617 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 18:25:17.208969  910617 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 18:25:17.209002  910617 docker.go:537] Images already preloaded, skipping extraction
	I0329 18:25:17.209531  910617 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 18:25:17.248122  910617 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 18:25:17.248155  910617 cache_images.go:84] Images are preloaded, skipping loading
	I0329 18:25:17.248212  910617 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 18:25:17.351622  910617 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0329 18:25:17.351665  910617 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 18:25:17.351685  910617 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220329180854-564087 NodeName:custom-weave-20220329180854-564087 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 18:25:17.351860  910617 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220329180854-564087"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 18:25:17.351981  910617 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220329180854-564087 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329180854-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0329 18:25:17.352045  910617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 18:25:17.361523  910617 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 18:25:17.361607  910617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 18:25:17.370482  910617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0329 18:25:17.387760  910617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 18:25:17.401833  910617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
	I0329 18:25:17.415073  910617 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0329 18:25:17.418019  910617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 18:25:17.426997  910617 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087 for IP: 192.168.58.2
	I0329 18:25:17.427093  910617 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 18:25:17.427127  910617 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 18:25:17.427173  910617 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/client.key
	I0329 18:25:17.427189  910617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/client.crt with IP's: []
	I0329 18:25:17.535079  910617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/client.crt ...
	I0329 18:25:17.535114  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/client.crt: {Name:mk948d8867376f7f0f1d052efccf985fdcae3ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:17.535312  910617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/client.key ...
	I0329 18:25:17.535326  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/client.key: {Name:mk34e919915f8820f8ddf90c65d983d9d96b00c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:17.535410  910617 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.key.cee25041
	I0329 18:25:17.535426  910617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 18:25:17.772464  910617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.crt.cee25041 ...
	I0329 18:25:17.772503  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.crt.cee25041: {Name:mk09cc8276bf9468a4f2792d059c32056ae2222b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:17.772715  910617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.key.cee25041 ...
	I0329 18:25:17.772739  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.key.cee25041: {Name:mkb697a1e9821c3a5a2bd14113c42f91fef46e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:17.772866  910617 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.crt
	I0329 18:25:17.772948  910617 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.key
	I0329 18:25:17.773015  910617 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.key
	I0329 18:25:17.773030  910617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.crt with IP's: []
	I0329 18:25:17.991326  910617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.crt ...
	I0329 18:25:17.991373  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.crt: {Name:mk5073b09ab850e06ef10fb6d81007515d490067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:17.991625  910617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.key ...
	I0329 18:25:17.991650  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.key: {Name:mk3ba79a627fc9e86319bcb7d05c995f98c8420e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:17.991930  910617 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem (1338 bytes)
	W0329 18:25:17.991989  910617 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087_empty.pem, impossibly tiny 0 bytes
	I0329 18:25:17.992008  910617 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1679 bytes)
	I0329 18:25:17.992050  910617 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 18:25:17.992088  910617 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 18:25:17.992124  910617 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 18:25:17.992187  910617 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem (1708 bytes)
	I0329 18:25:17.993012  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 18:25:18.013164  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 18:25:18.031444  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 18:25:18.050345  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329180854-564087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 18:25:18.071780  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 18:25:18.094934  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0329 18:25:18.116374  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 18:25:18.140615  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0329 18:25:18.164241  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/5640872.pem --> /usr/share/ca-certificates/5640872.pem (1708 bytes)
	I0329 18:25:18.185226  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 18:25:18.203607  910617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/564087.pem --> /usr/share/ca-certificates/564087.pem (1338 bytes)
	I0329 18:25:18.221822  910617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 18:25:18.235055  910617 ssh_runner.go:195] Run: openssl version
	I0329 18:25:18.239960  910617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/564087.pem && ln -fs /usr/share/ca-certificates/564087.pem /etc/ssl/certs/564087.pem"
	I0329 18:25:18.247626  910617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/564087.pem
	I0329 18:25:18.251728  910617 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:19 /usr/share/ca-certificates/564087.pem
	I0329 18:25:18.251787  910617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/564087.pem
	I0329 18:25:18.258365  910617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/564087.pem /etc/ssl/certs/51391683.0"
	I0329 18:25:18.267434  910617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5640872.pem && ln -fs /usr/share/ca-certificates/5640872.pem /etc/ssl/certs/5640872.pem"
	I0329 18:25:18.277258  910617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5640872.pem
	I0329 18:25:18.281009  910617 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:19 /usr/share/ca-certificates/5640872.pem
	I0329 18:25:18.281126  910617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5640872.pem
	I0329 18:25:18.286949  910617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5640872.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 18:25:18.295123  910617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 18:25:18.303664  910617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:25:18.306950  910617 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:25:18.307011  910617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:25:18.312985  910617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
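The `openssl x509 -hash` / `ln -fs` pairs above implement the standard OpenSSL CApath layout: each trusted CA is symlinked under its subject-hash with a `.0` suffix (e.g. `b5213941.0` for `minikubeCA.pem`) so that verification can locate issuers by hash. A minimal self-contained sketch of the same convention, using a throwaway self-signed CA with a hypothetical `/CN=demoCA` subject (not minikube's actual CA):

```shell
set -eu
dir=$(mktemp -d)
# Generate a throwaway self-signed CA (hypothetical subject, for illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
# Same subject hash computed by the `openssl x509 -hash -noout` calls above.
h=$(openssl x509 -hash -noout -in "$dir/ca.crt")
# Name the symlink <hash>.0, exactly as the `ln -fs ... /etc/ssl/certs/<hash>.0` steps do.
ln -fs "$dir/ca.crt" "$dir/$h.0"
# Verification resolves the issuer through the hashed symlink in -CApath.
openssl verify -CApath "$dir" "$dir/ca.crt"
```

The trailing `verify` prints `<path>: OK` once the hashed symlink is in place, which is the property the `test -L /etc/ssl/certs/<hash>.0 || ln -fs ...` guards above are establishing.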
	I0329 18:25:18.321245  910617 kubeadm.go:391] StartCluster: {Name:custom-weave-20220329180854-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329180854-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 18:25:18.321387  910617 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 18:25:18.363244  910617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 18:25:18.372849  910617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 18:25:18.382012  910617 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 18:25:18.382074  910617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 18:25:18.390957  910617 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 18:25:18.391007  910617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
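The `--config /var/tmp/minikube/kubeadm.yaml` file passed above is generated by minikube and its contents are not in this log; from the `StartCluster` parameters logged earlier, a fragment would look roughly like the following (kubernetesVersion, clusterName, dnsDomain, and serviceSubnet taken from the config dump above; the apiVersion and everything else is assumed, not confirmed by this log):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.5
clusterName: custom-weave-20220329180854-564087
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
```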
	I0329 18:25:18.984129  910617 out.go:203]   - Generating certificates and keys ...
	I0329 18:25:22.030092  910617 out.go:203]   - Booting up control plane ...
	I0329 18:25:30.094011  910617 out.go:203]   - Configuring RBAC rules ...
	I0329 18:25:30.508879  910617 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0329 18:25:30.510684  910617 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0329 18:25:30.510773  910617 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 18:25:30.510834  910617 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0329 18:25:30.514977  910617 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0329 18:25:30.515007  910617 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0329 18:25:30.534557  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 18:25:31.601827  910617 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.067227342s)
	I0329 18:25:31.601903  910617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 18:25:31.602005  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:31.602021  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=custom-weave-20220329180854-564087 minikube.k8s.io/updated_at=2022_03_29T18_25_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:31.611058  910617 ops.go:34] apiserver oom_adj: -16
	I0329 18:25:31.686759  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:32.246368  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:32.746938  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:33.246318  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:33.746874  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:34.246584  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:34.746216  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:35.245990  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:35.746530  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:36.246300  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:36.746856  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:37.246748  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:37.746305  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:38.246957  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:38.746741  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:39.246260  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:39.746723  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:40.246011  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:40.746139  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:41.246889  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:41.746187  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:42.246841  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:42.745914  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:43.246302  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:43.746650  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:44.246195  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:44.746615  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:45.246788  910617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:25:45.333914  910617 kubeadm.go:1020] duration metric: took 13.73197524s to wait for elevateKubeSystemPrivileges.
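The run of `kubectl get sa default` calls above is a fixed-interval poll: minikube retries until the default service account exists before granting kube-system privileges, and the duration metric records how long that took. The pattern, reduced to a self-contained sketch in which a stubbed `check` function stands in for the kubectl call (hypothetical, for illustration):

```shell
set -eu
attempt=0
# Stub for `kubectl get sa default`: fails twice, then succeeds on the third try.
check() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}
# Poll at a fixed interval until the check passes (the log above polls about every 500ms).
until check; do
  sleep 0.1
done
echo "ready after $attempt attempts"
```

With this stub the loop prints `ready after 3 attempts`; in the real run each iteration is one of the timestamped `kubectl get sa default` lines above.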
	I0329 18:25:45.333948  910617 kubeadm.go:393] StartCluster complete in 27.012717696s
	I0329 18:25:45.333976  910617 settings.go:142] acquiring lock: {Name:mkf193dd78851319876bf7c47a47f525125a4fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:45.334082  910617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 18:25:45.335952  910617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mke8ff89e3fadc84c0cca24c5855d2fcb9124f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:25:45.856393  910617 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220329180854-564087" rescaled to 1
	I0329 18:25:45.856469  910617 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 18:25:45.856505  910617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 18:25:45.857848  910617 out.go:176] * Verifying Kubernetes components...
	I0329 18:25:45.856695  910617 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 18:25:45.857980  910617 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220329180854-564087"
	I0329 18:25:45.858004  910617 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220329180854-564087"
	W0329 18:25:45.858012  910617 addons.go:165] addon storage-provisioner should already be in state true
	I0329 18:25:45.856886  910617 config.go:176] Loaded profile config "custom-weave-20220329180854-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:25:45.858044  910617 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220329180854-564087"
	I0329 18:25:45.858060  910617 host.go:66] Checking if "custom-weave-20220329180854-564087" exists ...
	I0329 18:25:45.858074  910617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220329180854-564087"
	I0329 18:25:45.857912  910617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 18:25:45.858460  910617 cli_runner.go:133] Run: docker container inspect custom-weave-20220329180854-564087 --format={{.State.Status}}
	I0329 18:25:45.858661  910617 cli_runner.go:133] Run: docker container inspect custom-weave-20220329180854-564087 --format={{.State.Status}}
	I0329 18:25:45.911024  910617 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 18:25:45.911171  910617 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 18:25:45.911187  910617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 18:25:45.911249  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:45.946725  910617 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220329180854-564087"
	W0329 18:25:45.946756  910617 addons.go:165] addon default-storageclass should already be in state true
	I0329 18:25:45.946789  910617 host.go:66] Checking if "custom-weave-20220329180854-564087" exists ...
	I0329 18:25:45.947415  910617 cli_runner.go:133] Run: docker container inspect custom-weave-20220329180854-564087 --format={{.State.Status}}
	I0329 18:25:45.956875  910617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49731 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa Username:docker}
	I0329 18:25:45.999832  910617 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 18:25:45.999864  910617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 18:25:45.999922  910617 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329180854-564087
	I0329 18:25:46.033587  910617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49731 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329180854-564087/id_rsa Username:docker}
	I0329 18:25:46.047095  910617 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220329180854-564087" to be "Ready" ...
	I0329 18:25:46.047437  910617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 18:25:46.050844  910617 node_ready.go:49] node "custom-weave-20220329180854-564087" has status "Ready":"True"
	I0329 18:25:46.050878  910617 node_ready.go:38] duration metric: took 3.747332ms waiting for node "custom-weave-20220329180854-564087" to be "Ready" ...
	I0329 18:25:46.050888  910617 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 18:25:46.060188  910617 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-bllmz" in "kube-system" namespace to be "Ready" ...
	I0329 18:25:46.254053  910617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 18:25:46.275679  910617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 18:25:46.587442  910617 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
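The injected host record reported here is produced by the `sed` pipeline a few lines above, which inserts a `hosts` block ahead of the `forward` directive in the coredns ConfigMap before `kubectl replace`-ing it. Reconstructed from the sed expression in this log, the resulting Corefile stanza is (the `forward` line's trailing arguments are elided in the log's regex, so only its prefix is shown):

```
hosts {
   192.168.58.1 host.minikube.internal
   fallthrough
}
forward . /etc/resolv.conf
```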
	I0329 18:25:46.871060  910617 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 18:25:46.871106  910617 addons.go:417] enableAddons completed in 1.014420702s
	I0329 18:25:48.073812  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:50.074282  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:52.576580  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:55.075271  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:25:57.576666  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:00.075205  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:02.574729  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:04.574767  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:06.574848  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:09.074550  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:11.573766  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:13.575212  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:16.075311  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:18.573051  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:20.575189  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:23.075108  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:25.575316  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:28.074887  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:30.075083  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:32.574734  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:35.075091  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:37.574559  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:40.074693  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:42.574020  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:44.575035  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:47.075391  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:49.573331  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:51.574479  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:54.073067  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:56.074802  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:26:58.574871  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:00.575516  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:03.073490  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:05.074280  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:07.074772  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:09.573458  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:12.074371  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:14.074773  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:16.573754  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:18.574523  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:20.574751  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:22.574812  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:25.074071  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:27.074500  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:29.574980  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:32.074103  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:34.074339  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:36.074699  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:38.574163  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:40.574285  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:43.074042  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:45.074126  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:47.074951  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:49.575241  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:52.074091  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:54.573827  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:57.073434  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:27:59.074090  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:01.074609  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:03.573941  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:05.574246  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:08.073842  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:10.074182  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:12.075036  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:14.574381  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:17.073611  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:19.075178  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:21.573926  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:23.575183  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:26.074417  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:28.574083  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:31.074687  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:33.574322  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:35.574778  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:37.575243  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:40.073884  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:42.074490  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:44.574829  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:46.575371  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:49.074580  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:51.573999  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:53.574245  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:55.574746  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:28:58.073693  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:00.073736  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:02.574503  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:04.574649  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:07.074412  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:09.573960  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:11.574060  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:13.574105  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:16.075208  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:18.574033  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:21.073133  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:23.074422  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:25.075500  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:27.574061  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:29.575029  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:32.073801  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:34.574844  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:37.073859  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:39.074987  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:41.574061  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:43.574828  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:46.075036  910617 pod_ready.go:102] pod "coredns-64897985d-bllmz" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:46.080036  910617 pod_ready.go:81] duration metric: took 4m0.019770851s waiting for pod "coredns-64897985d-bllmz" in "kube-system" namespace to be "Ready" ...
	E0329 18:29:46.080063  910617 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 18:29:46.080073  910617 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-sj6vh" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.081985  910617 pod_ready.go:97] error getting pod "coredns-64897985d-sj6vh" in "kube-system" namespace (skipping!): pods "coredns-64897985d-sj6vh" not found
	I0329 18:29:46.082012  910617 pod_ready.go:81] duration metric: took 1.933137ms waiting for pod "coredns-64897985d-sj6vh" in "kube-system" namespace to be "Ready" ...
	E0329 18:29:46.082023  910617 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-sj6vh" in "kube-system" namespace (skipping!): pods "coredns-64897985d-sj6vh" not found
	I0329 18:29:46.082031  910617 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.086823  910617 pod_ready.go:92] pod "etcd-custom-weave-20220329180854-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 18:29:46.086850  910617 pod_ready.go:81] duration metric: took 4.809778ms waiting for pod "etcd-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.086862  910617 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.091893  910617 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220329180854-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 18:29:46.091910  910617 pod_ready.go:81] duration metric: took 5.039582ms waiting for pod "kube-apiserver-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.091922  910617 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.271928  910617 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220329180854-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 18:29:46.271953  910617 pod_ready.go:81] duration metric: took 180.023306ms waiting for pod "kube-controller-manager-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.271966  910617 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-jnk29" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.671375  910617 pod_ready.go:92] pod "kube-proxy-jnk29" in "kube-system" namespace has status "Ready":"True"
	I0329 18:29:46.671405  910617 pod_ready.go:81] duration metric: took 399.432183ms waiting for pod "kube-proxy-jnk29" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:46.671418  910617 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:47.072158  910617 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220329180854-564087" in "kube-system" namespace has status "Ready":"True"
	I0329 18:29:47.072183  910617 pod_ready.go:81] duration metric: took 400.756312ms waiting for pod "kube-scheduler-custom-weave-20220329180854-564087" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:47.072196  910617 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-bhgwk" in "kube-system" namespace to be "Ready" ...
	I0329 18:29:49.478452  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:51.478725  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:53.978039  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:56.478579  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:29:58.479259  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:00.978750  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:02.987653  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:05.477821  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:07.978113  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:09.978797  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:11.979822  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:14.479187  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:16.977999  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:18.978316  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:20.978938  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:23.477765  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:25.478448  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:27.478905  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:29.978117  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:31.978235  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:34.477386  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:36.478219  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:38.478399  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:40.479166  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:42.979443  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:45.478252  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:47.479301  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:49.978532  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:52.478260  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:54.979297  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:56.979896  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:30:59.478049  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:01.479040  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:03.978764  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:06.479715  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:08.977613  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:10.978494  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:12.978576  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:14.981838  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:17.477781  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:19.978648  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:21.978762  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:24.478689  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:26.978943  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:28.979266  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:31.477402  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:33.479339  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:35.977773  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:37.979230  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:40.477699  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:42.478126  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:44.478205  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:46.978266  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:48.978568  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:51.478726  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:53.977936  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:56.478129  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:31:58.478485  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:00.479049  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:02.978470  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:05.478192  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:07.479011  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:09.977825  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:11.979000  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:14.478747  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:16.978400  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:19.479466  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:21.480805  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:23.978864  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:26.478006  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:28.978018  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:30.979366  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:32.979620  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:35.480660  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:37.484601  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:39.978053  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:41.978727  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:43.979209  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:46.479343  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:48.979038  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:51.478642  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:53.479354  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:55.980967  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:32:58.479072  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:00.981753  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:03.477898  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:05.478249  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:07.978623  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:10.477949  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:12.977842  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:15.477914  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:17.479122  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:19.978189  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:21.978254  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:23.979024  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:26.477551  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:28.478446  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:30.977682  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:32.977910  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:34.978183  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:37.477125  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:39.478246  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:41.478930  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:43.978174  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:46.479102  910617 pod_ready.go:102] pod "weave-net-bhgwk" in "kube-system" namespace has status "Ready":"False"
	I0329 18:33:47.481961  910617 pod_ready.go:81] duration metric: took 4m0.409751575s waiting for pod "weave-net-bhgwk" in "kube-system" namespace to be "Ready" ...
	E0329 18:33:47.481985  910617 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 18:33:47.481990  910617 pod_ready.go:38] duration metric: took 8m1.431088638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 18:33:47.482015  910617 api_server.go:51] waiting for apiserver process to appear ...
	I0329 18:33:47.484403  910617 out.go:176] 
	W0329 18:33:47.484550  910617 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0329 18:33:47.484647  910617 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0329 18:33:47.484666  910617 out.go:241] * Related issues:
	* Related issues:
	W0329 18:33:47.484717  910617 out.go:241]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0329 18:33:47.484779  910617 out.go:241]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0329 18:33:47.486331  910617 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (522.12s)

TestNetworkPlugins/group/enable-default-cni/DNS (345.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:26:31.061474  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.188678021s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.272716335s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:27:02.271497  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14455878s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13349966s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.171838619s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150751737s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151437875s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13399043s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:29:10.441419  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:29:30.084847  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148836937s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:30:15.026604  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:15.031877  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:15.042121  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:15.062393  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:15.102730  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:15.183005  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:15.343521  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:15.664135  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:16.304430  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:17.584811  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:20.145629  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:30:23.058817  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:30:25.266196  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129692967s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:31:17.158097  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 18:31:17.784088  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1461227s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:31:36.948784  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.168713953s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (345.18s)
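The assertion at net_test.go:174 compares the captured nslookup output against the expected kubernetes.default ClusterIP, `10.96.0.1`. As a sketch, that pass/fail decision reduces to a grep over the captured output; the `check_dns_output` helper below is illustrative and not part of the test suite:

```shell
#!/bin/sh
# Illustrative helper mirroring the test's assertion: the nslookup
# output must mention the kubernetes.default ClusterIP (10.96.0.1).
check_dns_output() {
  if printf '%s\n' "$1" | grep -q '10\.96\.0\.1'; then
    echo PASS
  else
    echo FAIL
  fi
}

# Output captured in the failing runs above:
check_dns_output ';; connection timed out; no servers could be reached'   # prints FAIL

# Output a healthy cluster DNS would include:
check_dns_output 'Address: 10.96.0.1'                                     # prints PASS
```

The timeouts above mean the pod never reached any DNS server at all, so the want string `10.96.0.1` can never appear in the got string.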

TestNetworkPlugins/group/false/DNS (339.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.182931373s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:27:12.022304  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135302139s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141109769s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.165528101s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150916135s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:28:33.942984  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.172457819s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:28:42.756056  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.191719331s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:29:18.010949  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 18:29:18.427690  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:29:25.127850  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131151605s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:29:46.112234  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.169515231s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:30:35.507236  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157949729s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:30:50.099177  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:30:55.987769  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148306358s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.171868498s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (339.19s)
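The interleaved `cert_rotation.go:168` lines appear to come from client-go's certificate reloader still watching `client.crt` files of minikube profiles that have since been deleted; they are unrelated noise around the DNS failure. One way to read the retry loop more easily is to filter a saved copy of the log; the `strip_noise` helper below is illustrative, not part of the tooling:

```shell
#!/bin/sh
# Illustrative filter: drop client-go cert_rotation warnings and keep
# only the test-framework lines so the nslookup retry loop stands out.
strip_noise() {
  grep -v 'cert_rotation\.go' | grep -E '^(net_test\.go|=== |--- FAIL)'
}

# Example on two lines excerpted from this report:
printf '%s\n' \
  'E0329 18:30:35.507236  564087 cert_rotation.go:168] key failed' \
  '--- FAIL: TestNetworkPlugins/group/false/DNS (339.19s)' \
  | strip_noise   # prints only the --- FAIL line
```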

TestNetworkPlugins/group/kindnet/DNS (338.6s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:33:42.755738  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141207727s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.152022162s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:34:13.132120  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 18:34:18.011397  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 18:34:18.427436  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129743597s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:34:25.127404  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:34:30.085271  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135914306s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127830282s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:35:15.026250  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135163369s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:35:23.058788  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138617043s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:35:42.710355  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory
E0329 18:35:50.098961  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:36:00.205967  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125188381s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0329 18:36:17.158274  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 18:36:20.326195  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:20.331474  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:20.341719  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:20.361981  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:20.402166  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:20.482487  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:20.642863  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:20.963945  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:21.604857  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:22.885034  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:25.446238  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:30.567239  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:36:40.807433  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:36:40.879776  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:40.885138  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:40.895425  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:40.915708  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:40.955995  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:41.036304  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:41.196950  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:41.517514  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:42.158265  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:43.438730  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:45.999335  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:36:46.105635  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132435668s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 18:36:51.119971  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:37:01.288214  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
E0329 18:37:01.360466  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:37:21.840904  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.234885442s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135892799s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:39:04.170281  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142047147s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kindnet/DNS (338.60s)
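The assertion at net_test.go:174 is a plain substring check: the test expects the nslookup output to contain the ClusterIP of the `kubernetes.default` service, `10.96.0.1`. A minimal sketch of that comparison in Python; the helper name and the "passing" sample string are illustrative assumptions, while the failing sample is taken verbatim from the log above:

```python
def dns_lookup_ok(output: str, want: str = "10.96.0.1") -> bool:
    """Return True when the nslookup output contains the expected ClusterIP."""
    return want in output

# Failing output copied from the log above; passing output is an assumed
# example of what nslookup prints on a healthy cluster.
failing = ";; connection timed out; no servers could be reached\n\n\n"
passing = "Name:\tkubernetes.default.svc.cluster.local\nAddress: 10.96.0.1\n"
```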

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (337.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:37:42.249271  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14438036s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140617743s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132887864s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12380607s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130775136s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136204112s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 18:39:24.722858  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:39:25.127624  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138965419s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:40:05.802283  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127917973s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 18:40:15.026290  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329180854-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:40:23.058964  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13470256s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 18:40:41.472988  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:40:50.098811  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135114373s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:41:48.010479  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125601109s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 18:42:08.563363  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
E0329 18:42:13.145283  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:42:21.060912  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153470152s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (337.04s)
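Each `Run` / `Non-zero exit` pair above is one attempt of the test's retry loop: the probe is rerun until it succeeds or the overall timeout expires, which is why the same 15-second nslookup failure repeats for the full 337 seconds. A hedged sketch of that polling pattern; the function name and parameters are illustrative, not minikube's actual test code:

```python
import subprocess
import time

def poll_dns(cmd, attempts=5, delay=2, want="10.96.0.1"):
    """Rerun cmd until its stdout contains the expected ClusterIP.

    Returns the successful stdout, or None once all attempts fail.
    """
    for _ in range(attempts):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode == 0 and want in proc.stdout:
            return proc.stdout
        time.sleep(delay)
    return None
```

In the real test the command is the `kubectl --context … exec deployment/netcat -- nslookup kubernetes.default` invocation shown in the log.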

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (355.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151375769s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126939482s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:38:42.756595  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137856556s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119885373s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1388101s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0329 18:39:30.085193  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136719824s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **

net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145493398s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130813693s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142171434s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0329 18:41:17.158427  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 18:41:20.325407  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329180853-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:41:40.879652  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135625427s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
E0329 18:42:28.172907  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131461701s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329180853-564087 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.236539737s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (355.89s)
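Every retry above captured the same timeout output, so the verdict is mechanical: net_test.go accepts the lookup only if the captured output contains the cluster service IP (the `want=*"10.96.0.1"*` in the failure message). The shell sketch below re-implements that pass/fail criterion for local triage; `check_dns_output` is a hypothetical helper for illustration, not minikube's actual test code.

```shell
# Sketch of the acceptance check behind net_test.go:174 (hypothetical helper,
# not minikube's implementation).
check_dns_output() {
  # $1: captured output of
  #   kubectl --context <profile> exec deployment/netcat -- nslookup kubernetes.default
  case "$1" in
    *"10.96.0.1"*) echo "PASS" ;;  # lookup reached the cluster DNS service
    *)             echo "FAIL" ;;  # e.g. ";; connection timed out; no servers could be reached"
  esac
}

check_dns_output ";; connection timed out; no servers could be reached"  # prints FAIL
```

Each of the eight retries logged above captured the timeout message, so every attempt evaluates to FAIL and the test gives up after 355.89s.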


Test pass (247/281)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 4.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.23.5/json-events 5.13
11 TestDownloadOnly/v1.23.5/preload-exists 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.07
17 TestDownloadOnly/v1.23.6-rc.0/json-events 5.33
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.34
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
25 TestDownloadOnlyKic 23.43
26 TestBinaryMirror 0.86
27 TestOffline 312.18
29 TestAddons/Setup 124.06
32 TestAddons/parallel/Ingress 25.1
33 TestAddons/parallel/MetricsServer 5.7
34 TestAddons/parallel/HelmTiller 11.29
36 TestAddons/parallel/CSI 57.18
38 TestAddons/serial/GCPAuth 46.4
39 TestAddons/StoppedEnableDisable 11.28
40 TestCertOptions 31.5
41 TestCertExpiration 218.47
42 TestDockerFlags 31.84
43 TestForceSystemdFlag 50.59
44 TestForceSystemdEnv 34.4
45 TestKVMDriverInstallOrUpdate 8.2
49 TestErrorSpam/setup 26.18
50 TestErrorSpam/start 0.89
51 TestErrorSpam/status 1.11
52 TestErrorSpam/pause 1.41
53 TestErrorSpam/unpause 1.49
54 TestErrorSpam/stop 11.06
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 40.73
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 254.12
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.15
65 TestFunctional/serial/CacheCmd/cache/add_remote 7.16
66 TestFunctional/serial/CacheCmd/cache/add_local 2.61
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
70 TestFunctional/serial/CacheCmd/cache/cache_reload 3.09
71 TestFunctional/serial/CacheCmd/cache/delete 0.12
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 574.16
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 1.03
77 TestFunctional/serial/LogsFileCmd 1.02
79 TestFunctional/parallel/ConfigCmd 0.43
81 TestFunctional/parallel/DryRun 0.55
82 TestFunctional/parallel/InternationalLanguage 0.21
83 TestFunctional/parallel/StatusCmd 1.39
86 TestFunctional/parallel/ServiceCmd 14.63
87 TestFunctional/parallel/ServiceCmdConnect 18.74
88 TestFunctional/parallel/AddonsCmd 0.17
91 TestFunctional/parallel/SSHCmd 0.77
92 TestFunctional/parallel/CpCmd 1.37
93 TestFunctional/parallel/MySQL 25.37
94 TestFunctional/parallel/FileSync 0.44
95 TestFunctional/parallel/CertSync 2.48
99 TestFunctional/parallel/NodeLabels 0.05
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
103 TestFunctional/parallel/DockerEnv/bash 1.49
104 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
106 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
107 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
108 TestFunctional/parallel/ImageCommands/ImageBuild 2.96
109 TestFunctional/parallel/ImageCommands/Setup 1.48
110 TestFunctional/parallel/Version/short 0.06
111 TestFunctional/parallel/Version/components 0.58
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.82
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.75
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
121 TestFunctional/parallel/ProfileCmd/profile_list 0.52
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.75
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.9
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.46
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.38
135 TestFunctional/parallel/MountCmd/any-port 15.41
136 TestFunctional/parallel/MountCmd/specific-port 1.9
137 TestFunctional/delete_addon-resizer_images 0.1
138 TestFunctional/delete_my-image_image 0.03
139 TestFunctional/delete_minikube_cached_images 0.03
142 TestIngressAddonLegacy/StartLegacyK8sCluster 56.4
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.25
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
146 TestIngressAddonLegacy/serial/ValidateIngressAddons 45.53
149 TestJSONOutput/start/Command 40.68
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.68
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.6
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 10.85
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.28
174 TestKicCustomNetwork/create_custom_network 30.03
175 TestKicCustomNetwork/use_default_bridge_network 28.34
176 TestKicExistingNetwork 28.49
177 TestKicCustomSubnet 28.52
178 TestMainNoArgs 0.06
181 TestMountStart/serial/StartWithMountFirst 5.55
182 TestMountStart/serial/VerifyMountFirst 0.33
183 TestMountStart/serial/StartWithMountSecond 5.75
184 TestMountStart/serial/VerifyMountSecond 0.33
185 TestMountStart/serial/DeleteFirst 1.74
186 TestMountStart/serial/VerifyMountPostDelete 0.33
187 TestMountStart/serial/Stop 1.27
188 TestMountStart/serial/RestartStopped 6.88
189 TestMountStart/serial/VerifyMountPostStop 0.33
192 TestMultiNode/serial/FreshStart2Nodes 86.05
195 TestMultiNode/serial/AddNode 28.13
196 TestMultiNode/serial/ProfileList 0.35
197 TestMultiNode/serial/CopyFile 11.86
198 TestMultiNode/serial/StopNode 2.48
199 TestMultiNode/serial/StartAfterStop 24.62
200 TestMultiNode/serial/RestartKeepsNodes 106.84
201 TestMultiNode/serial/DeleteNode 5.34
202 TestMultiNode/serial/StopMultiNode 21.84
203 TestMultiNode/serial/RestartMultiNode 59.32
204 TestMultiNode/serial/ValidateNameConflict 29.62
209 TestPreload 124.05
211 TestScheduledStopUnix 99.95
212 TestSkaffold 59.93
214 TestInsufficientStorage 14.54
215 TestRunningBinaryUpgrade 75.36
217 TestKubernetesUpgrade 99.92
218 TestMissingContainerUpgrade 98.98
220 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
221 TestStoppedBinaryUpgrade/Setup 1.46
222 TestNoKubernetes/serial/StartWithK8s 47.34
223 TestStoppedBinaryUpgrade/Upgrade 94.21
224 TestNoKubernetes/serial/StartWithStopK8s 19.08
225 TestNoKubernetes/serial/Start 6.37
226 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
227 TestNoKubernetes/serial/ProfileList 1.9
228 TestNoKubernetes/serial/Stop 1.34
229 TestNoKubernetes/serial/StartNoArgs 6.43
230 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
231 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
240 TestPause/serial/Start 45.06
241 TestPause/serial/SecondStartNoReconfiguration 5.3
242 TestPause/serial/Pause 0.67
243 TestPause/serial/VerifyStatus 0.39
244 TestPause/serial/Unpause 0.66
245 TestPause/serial/PauseAgain 0.94
246 TestPause/serial/DeletePaused 2.51
247 TestPause/serial/VerifyDeletedResources 0.64
260 TestStartStop/group/old-k8s-version/serial/FirstStart 319.73
262 TestStartStop/group/no-preload/serial/FirstStart 54.14
264 TestStartStop/group/embed-certs/serial/FirstStart 291.6
265 TestStartStop/group/no-preload/serial/DeployApp 8.34
266 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.65
267 TestStartStop/group/no-preload/serial/Stop 10.87
268 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
269 TestStartStop/group/no-preload/serial/SecondStart 338.22
271 TestStartStop/group/default-k8s-different-port/serial/FirstStart 289.09
272 TestStartStop/group/old-k8s-version/serial/DeployApp 8.44
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.59
274 TestStartStop/group/old-k8s-version/serial/Stop 10.87
275 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
276 TestStartStop/group/old-k8s-version/serial/SecondStart 561.9
277 TestStartStop/group/embed-certs/serial/DeployApp 8.4
278 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.65
279 TestStartStop/group/embed-certs/serial/Stop 10.85
280 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
281 TestStartStop/group/embed-certs/serial/SecondStart 571.91
282 TestStartStop/group/default-k8s-different-port/serial/DeployApp 7.42
283 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.59
284 TestStartStop/group/default-k8s-different-port/serial/Stop 10.82
285 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
286 TestStartStop/group/default-k8s-different-port/serial/SecondStart 322.41
287 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.02
288 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.18
289 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
290 TestStartStop/group/no-preload/serial/Pause 3.04
292 TestStartStop/group/newest-cni/serial/FirstStart 40.41
293 TestStartStop/group/newest-cni/serial/DeployApp 0
294 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
295 TestStartStop/group/newest-cni/serial/Stop 10.78
296 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
297 TestStartStop/group/newest-cni/serial/SecondStart 20.37
298 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
299 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
300 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
301 TestStartStop/group/newest-cni/serial/Pause 3.06
302 TestNetworkPlugins/group/auto/Start 41.99
303 TestNetworkPlugins/group/auto/KubeletFlags 0.36
304 TestNetworkPlugins/group/auto/NetCatPod 11.22
306 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 9.01
307 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.07
308 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.38
309 TestStartStop/group/default-k8s-different-port/serial/Pause 3
310 TestNetworkPlugins/group/false/Start 288.24
311 TestNetworkPlugins/group/cilium/Start 95.67
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
315 TestStartStop/group/old-k8s-version/serial/Pause 3.02
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
320 TestStartStop/group/embed-certs/serial/Pause 3.12
322 TestNetworkPlugins/group/cilium/ControllerPod 5.02
323 TestNetworkPlugins/group/cilium/KubeletFlags 0.41
324 TestNetworkPlugins/group/cilium/NetCatPod 12.98
325 TestNetworkPlugins/group/cilium/DNS 0.17
326 TestNetworkPlugins/group/cilium/Localhost 0.17
327 TestNetworkPlugins/group/cilium/HairPin 0.16
328 TestNetworkPlugins/group/enable-default-cni/Start 42.78
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
332 TestNetworkPlugins/group/false/KubeletFlags 0.41
333 TestNetworkPlugins/group/false/NetCatPod 11.36
335 TestNetworkPlugins/group/kindnet/Start 59.91
336 TestNetworkPlugins/group/bridge/Start 293.26
337 TestNetworkPlugins/group/kubenet/Start 289.93
338 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
340 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
343 TestNetworkPlugins/group/bridge/NetCatPod 10.3
345 TestNetworkPlugins/group/kubenet/KubeletFlags 0.35
346 TestNetworkPlugins/group/kubenet/NetCatPod 12.23

TestDownloadOnly/v1.16.0/json-events (4.85s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220329171133-564087 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220329171133-564087 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.84963757s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.85s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220329171133-564087
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220329171133-564087: exit status 85 (73.905436ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:11:33
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329171133-564087"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
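Note that LogsDuration passes even though `minikube logs` exits non-zero: the profile was created with `--download-only`, so no control-plane node exists and exit status 85 appears to be the tolerated outcome here. A minimal sketch of that acceptance rule, with a hypothetical helper name:

```shell
# For a --download-only profile, `minikube logs -p <profile>` is expected to
# fail with exit status 85 (no control plane to collect logs from).
# logs_status_expected is a hypothetical helper for illustration.
logs_status_expected() {
  [ "$1" -eq 85 ]
}

logs_status_expected 85 && echo "expected failure for download-only profile"
```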

TestDownloadOnly/v1.23.5/json-events (5.13s)

=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220329171133-564087 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220329171133-564087 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.12590049s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (5.13s)

TestDownloadOnly/v1.23.5/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

TestDownloadOnly/v1.23.5/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220329171133-564087
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220329171133-564087: exit status 85 (73.40249ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:11:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:11:38.365738  564244 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:11:38.365875  564244 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:11:38.365886  564244 out.go:310] Setting ErrFile to fd 2...
	I0329 17:11:38.365892  564244 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:11:38.366018  564244 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	W0329 17:11:38.366147  564244 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: no such file or directory
	I0329 17:11:38.366277  564244 out.go:304] Setting JSON to true
	I0329 17:11:38.367223  564244 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6852,"bootTime":1648567047,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:11:38.367300  564244 start.go:124] virtualization: kvm guest
	I0329 17:11:38.369989  564244 notify.go:193] Checking for updates...
	I0329 17:11:38.372303  564244 config.go:176] Loaded profile config "download-only-20220329171133-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0329 17:11:38.372396  564244 start.go:708] api.Load failed for download-only-20220329171133-564087: filestore "download-only-20220329171133-564087": Docker machine "download-only-20220329171133-564087" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:11:38.372459  564244 driver.go:346] Setting default libvirt URI to qemu:///system
	W0329 17:11:38.372503  564244 start.go:708] api.Load failed for download-only-20220329171133-564087: filestore "download-only-20220329171133-564087": Docker machine "download-only-20220329171133-564087" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:11:38.409992  564244 docker.go:137] docker version: linux-20.10.14
	I0329 17:11:38.410119  564244 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:11:38.495078  564244 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:11:38.437365076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:11:38.495196  564244 docker.go:254] overlay module found
	I0329 17:11:38.497312  564244 start.go:283] selected driver: docker
	I0329 17:11:38.497326  564244 start.go:800] validating driver "docker" against &{Name:download-only-20220329171133-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220329171133-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:11:38.497616  564244 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:11:38.583058  564244 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:11:38.524649411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:11:38.583928  564244 cni.go:93] Creating CNI manager for ""
	I0329 17:11:38.583958  564244 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:11:38.583976  564244 start_flags.go:306] config:
	{Name:download-only-20220329171133-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220329171133-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:11:38.586444  564244 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:11:38.587897  564244 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:11:38.587990  564244 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:11:38.631976  564244 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 17:11:38.632011  564244 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 17:11:38.729788  564244 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 17:11:38.729819  564244 cache.go:57] Caching tarball of preloaded images
	I0329 17:11:38.730137  564244 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:11:38.732338  564244 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:11:38.875711  564244 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4?checksum=md5:b4b3d1771f6a934557953d7b31a587d4 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 17:11:41.698756  564244 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:11:41.698853  564244 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:11:42.676543  564244 cache.go:60] Finished verifying existence of preloaded tar for v1.23.5 on docker
	I0329 17:11:42.676691  564244 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/download-only-20220329171133-564087/config.json ...
	I0329 17:11:42.676891  564244 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:11:42.677128  564244 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/linux/amd64/v1.23.5/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329171133-564087"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6-rc.0/json-events (5.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220329171133-564087 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220329171133-564087 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.33057551s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (5.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220329171133-564087
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220329171133-564087: exit status 85 (70.491636ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:11:43
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:11:43.566028  564393 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:11:43.566158  564393 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:11:43.566167  564393 out.go:310] Setting ErrFile to fd 2...
	I0329 17:11:43.566171  564393 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:11:43.566274  564393 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	W0329 17:11:43.566391  564393 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: no such file or directory
	I0329 17:11:43.566507  564393 out.go:304] Setting JSON to true
	I0329 17:11:43.567382  564393 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6857,"bootTime":1648567047,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:11:43.567459  564393 start.go:124] virtualization: kvm guest
	I0329 17:11:43.569925  564393 notify.go:193] Checking for updates...
	I0329 17:11:43.572171  564393 config.go:176] Loaded profile config "download-only-20220329171133-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	W0329 17:11:43.572223  564393 start.go:708] api.Load failed for download-only-20220329171133-564087: filestore "download-only-20220329171133-564087": Docker machine "download-only-20220329171133-564087" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:11:43.572280  564393 driver.go:346] Setting default libvirt URI to qemu:///system
	W0329 17:11:43.572306  564393 start.go:708] api.Load failed for download-only-20220329171133-564087: filestore "download-only-20220329171133-564087": Docker machine "download-only-20220329171133-564087" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:11:43.610787  564393 docker.go:137] docker version: linux-20.10.14
	I0329 17:11:43.610879  564393 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:11:43.697107  564393 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:11:43.637730299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:11:43.697226  564393 docker.go:254] overlay module found
	I0329 17:11:43.699370  564393 start.go:283] selected driver: docker
	I0329 17:11:43.699385  564393 start.go:800] validating driver "docker" against &{Name:download-only-20220329171133-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220329171133-564087 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:11:43.699622  564393 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:11:43.784759  564393 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-29 17:11:43.726131446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:11:43.785364  564393 cni.go:93] Creating CNI manager for ""
	I0329 17:11:43.785388  564393 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:11:43.785397  564393 start_flags.go:306] config:
	{Name:download-only-20220329171133-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:download-only-20220329171133-564087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:11:43.787538  564393 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:11:43.788825  564393 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0329 17:11:43.788920  564393 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:11:43.832830  564393 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 17:11:43.832858  564393 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 17:11:43.931629  564393 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.6-rc.0/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0329 17:11:43.931661  564393 cache.go:57] Caching tarball of preloaded images
	I0329 17:11:43.932004  564393 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0329 17:11:43.934093  564393 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:11:44.075596  564393 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.6-rc.0/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d90e40f602d4362984725b3ec643bc0d -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0329 17:11:47.011833  564393 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:11:47.011933  564393 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:11:48.057125  564393 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6-rc.0 on docker
	I0329 17:11:48.057308  564393 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/download-only-20220329171133-564087/config.json ...
	I0329 17:11:48.057517  564393 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0329 17:11:48.057817  564393 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.6-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/linux/amd64/v1.23.6-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329171133-564087"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.34s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220329171133-564087
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
x
+
TestDownloadOnlyKic (23.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220329171149-564087 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:230: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220329171149-564087 --force --alsologtostderr --driver=docker  --container-runtime=docker: (21.691592575s)
helpers_test.go:176: Cleaning up "download-docker-20220329171149-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220329171149-564087
--- PASS: TestDownloadOnlyKic (23.43s)

                                                
                                    
x
+
TestBinaryMirror (0.86s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220329171213-564087 --alsologtostderr --binary-mirror http://127.0.0.1:46523 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-20220329171213-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220329171213-564087
--- PASS: TestBinaryMirror (0.86s)

                                                
                                    
x
+
TestOffline (312.18s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220329180452-564087 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220329180452-564087 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (5m9.488876826s)
helpers_test.go:176: Cleaning up "offline-docker-20220329180452-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220329180452-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220329180452-564087: (2.688526057s)
--- PASS: TestOffline (312.18s)

                                                
                                    
x
+
TestAddons/Setup (124.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220329171213-564087 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220329171213-564087 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m4.061142595s)
--- PASS: TestAddons/Setup (124.06s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (25.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220329171213-564087 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220329171213-564087 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-20220329171213-564087 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (892.398821ms)
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.106.215.24:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220329171213-564087 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220329171213-564087 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [00638447-f61a-4f45-80c9-e730776119b8] Pending
helpers_test.go:343: "nginx" [00638447-f61a-4f45-80c9-e730776119b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [00638447-f61a-4f45-80c9-e730776119b8] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0060406s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20220329171213-564087 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable ingress-dns --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable ingress-dns --alsologtostderr -v=1: (1.491141552s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable ingress --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable ingress --alsologtostderr -v=1: (7.541295098s)
--- PASS: TestAddons/parallel/Ingress (25.10s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 10.108717ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-bd6f4dd56-p8wkj" [1e19f7ab-ea18-4744-b113-621f8315913e] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009344958s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220329171213-564087 top pods -n kube-system
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/HelmTiller (11.29s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 10.098948ms
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-rqh52" [aa5d19e1-1c65-4e83-b011-cd5b8bc775cc] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009332534s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220329171213-564087 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220329171213-564087 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.936229872s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.29s)

TestAddons/parallel/CSI (57.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 10.726763ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220329171213-564087 create -f testdata/csi-hostpath-driver/pvc.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220329171213-564087 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220329171213-564087 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [9d1a3471-a05c-462e-b89f-4f69e856ac00] Pending
helpers_test.go:343: "task-pv-pod" [9d1a3471-a05c-462e-b89f-4f69e856ac00] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [9d1a3471-a05c-462e-b89f-4f69e856ac00] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 27.006089517s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220329171213-564087 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220329171213-564087 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220329171213-564087 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220329171213-564087 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220329171213-564087 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220329171213-564087 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220329171213-564087 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220329171213-564087 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [925ce38b-9526-4479-9b81-3d99045de650] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [925ce38b-9526-4479-9b81-3d99045de650] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [925ce38b-9526-4479-9b81-3d99045de650] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 18.007124588s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220329171213-564087 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220329171213-564087 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220329171213-564087 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable csi-hostpath-driver --alsologtostderr -v=1
=== CONT  TestAddons/parallel/CSI
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.920653879s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable volumesnapshots --alsologtostderr -v=1
2022/03/29 17:15:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:15:19 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:19 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:15:27 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:28 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:15:28 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:28 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:15:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:29 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:15:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:31 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:15:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:35 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:15:43 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:44 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:15:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:15:45 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:45 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:15:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:47 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:15:51 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:15:51 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:15:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:00 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:16:00 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:00 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:16:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:16:03 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:03 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:16:07 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:07 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:16:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:16 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:16:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:16 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:16:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:16:19 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:19 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:16:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:23 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:16:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:32 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:16:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:16:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:33 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:16:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:35 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:16:39 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:39 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:16:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:51 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:16:51 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:51 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:16:52 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:52 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:16:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:16:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:16:58 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:17:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:14 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:17:14 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:14 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:17:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:17:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:17:21 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:21 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:17:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:41 [DEBUG] GET http://192.168.49.2:5000
2022/03/29 17:17:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/03/29 17:17:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/03/29 17:17:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/03/29 17:17:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/03/29 17:17:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/03/29 17:17:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
--- PASS: TestAddons/parallel/CSI (57.18s)

TestAddons/serial/GCPAuth (46.4s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220329171213-564087 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [0db0fb65-b292-4fa1-8fb2-61e694c31f14] Pending
helpers_test.go:343: "busybox" [0db0fb65-b292-4fa1-8fb2-61e694c31f14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [0db0fb65-b292-4fa1-8fb2-61e694c31f14] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.011281812s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220329171213-564087 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220329171213-564087 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20220329171213-564087 addons disable gcp-auth --alsologtostderr -v=1: (6.074343537s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220329171213-564087 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20220329171213-564087 addons enable gcp-auth: (2.949987525s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20220329171213-564087 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-f2wnq" [851698ed-6dc6-4efd-ba0d-88bde8a1eb49] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-f2wnq" [851698ed-6dc6-4efd-ba0d-88bde8a1eb49] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 17.006080075s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20220329171213-564087 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-bl2km" [3ebd617a-59c4-4a8a-b4a2-f39a795df97b] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-bl2km" [3ebd617a-59c4-4a8a-b4a2-f39a795df97b] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.006146352s
--- PASS: TestAddons/serial/GCPAuth (46.40s)

TestAddons/StoppedEnableDisable (11.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220329171213-564087
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220329171213-564087: (11.08753627s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220329171213-564087
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220329171213-564087
--- PASS: TestAddons/StoppedEnableDisable (11.28s)

TestCertOptions (31.5s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220329180818-564087 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220329180818-564087 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (27.940878905s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220329180818-564087 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20220329180818-564087 config view
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220329180818-564087 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220329180818-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220329180818-564087
=== CONT  TestCertOptions
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220329180818-564087: (2.768948676s)
--- PASS: TestCertOptions (31.50s)

TestCertExpiration (218.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220329180721-564087 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220329180721-564087 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (31.456556686s)
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220329180721-564087 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220329180721-564087 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.321426052s)
helpers_test.go:176: Cleaning up "cert-expiration-20220329180721-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220329180721-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220329180721-564087: (2.691240789s)
--- PASS: TestCertExpiration (218.47s)

TestDockerFlags (31.84s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220329180746-564087 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220329180746-564087 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.350319408s)
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220329180746-564087 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220329180746-564087 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220329180746-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220329180746-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220329180746-564087: (2.643624375s)
--- PASS: TestDockerFlags (31.84s)
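The check driving this pass is that every `--docker-env` value surfaces in dockerd's systemd unit environment. A minimal stand-alone sketch of that assertion, with the `Environment=` line hard-coded as sample `systemctl show docker --property=Environment` output rather than read from a live node:

```shell
# Hypothetical re-creation of the docker_test.go env check: each value passed
# via --docker-env must appear in the dockerd unit's Environment property.
# The sample line below is hard-coded, not queried from a running cluster.
env_line='Environment=FOO=BAR BAZ=BAT'
for want in FOO=BAR BAZ=BAT; do
  case "$env_line" in
    *"$want"*) echo "found $want" ;;
    *)         echo "missing $want"; exit 1 ;;
  esac
done
```

Against a real profile, `env_line` would come from `minikube -p <profile> ssh "sudo systemctl show docker --property=Environment --no-pager"`, as the log above shows.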

TestForceSystemdFlag (50.59s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220329180452-564087 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220329180452-564087 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (47.382950057s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220329180452-564087 ssh "docker info --format {{.CgroupDriver}}"
=== CONT  TestForceSystemdFlag
helpers_test.go:176: Cleaning up "force-systemd-flag-20220329180452-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220329180452-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220329180452-564087: (2.686713434s)
--- PASS: TestForceSystemdFlag (50.59s)
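The assertion behind this pass can be sketched stand-alone: with `--force-systemd`, `docker info --format '{{.CgroupDriver}}'` inside the node should report `systemd`. The value is hard-coded here rather than queried from a live cluster:

```shell
# Sketch of the docker_test.go cgroup-driver check. In the real test, "driver"
# would be the output of: minikube -p <profile> ssh "docker info --format {{.CgroupDriver}}"
driver='systemd'   # hard-coded stand-in for the live query
if [ "$driver" = "systemd" ]; then
  echo "cgroup driver OK: $driver"
else
  echo "unexpected cgroup driver: $driver" >&2
  exit 1
fi
```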

TestForceSystemdEnv (34.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220329180854-564087 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220329180854-564087 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.349409113s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220329180854-564087 ssh "docker info --format {{.CgroupDriver}}"
E0329 18:09:25.766021  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
helpers_test.go:176: Cleaning up "force-systemd-env-20220329180854-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220329180854-564087
E0329 18:09:26.406468  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:27.686619  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220329180854-564087: (2.595383227s)
--- PASS: TestForceSystemdEnv (34.40s)

TestKVMDriverInstallOrUpdate (8.2s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.20s)

TestErrorSpam/setup (26.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220329171859-564087 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220329171859-564087 --driver=docker  --container-runtime=docker
E0329 17:19:18.011084  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:18.016893  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:18.027153  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:18.047503  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:18.087781  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:18.168071  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:18.328474  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:18.649117  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:19.289451  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:20.570221  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:19:23.131999  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220329171859-564087 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220329171859-564087 --driver=docker  --container-runtime=docker: (26.178248348s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (26.18s)

TestErrorSpam/start (0.89s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 start --dry-run
--- PASS: TestErrorSpam/start (0.89s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 pause
E0329 17:19:28.252137  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (11.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 stop
E0329 17:19:38.493075  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 stop: (10.800718567s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220329171859-564087 --log_dir /tmp/nospam-20220329171859-564087 stop
--- PASS: TestErrorSpam/stop (11.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1796: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/test/nested/copy/564087/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2178: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220329171943-564087 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0329 17:19:58.973681  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
functional_test.go:2178: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220329171943-564087 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.726235008s)
--- PASS: TestFunctional/serial/StartWithProxy (40.73s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (254.12s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:656: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220329171943-564087 --alsologtostderr -v=8
E0329 17:20:39.934216  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:22:01.855057  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:24:18.011284  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
functional_test.go:656: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220329171943-564087 --alsologtostderr -v=8: (4m14.120665848s)
functional_test.go:660: soft start took 4m14.121363799s for "functional-20220329171943-564087" cluster.
--- PASS: TestFunctional/serial/SoftStart (254.12s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:678: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.15s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:693: (dbg) Run:  kubectl --context functional-20220329171943-564087 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

TestFunctional/serial/CacheCmd/cache/add_remote (7.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cache add k8s.gcr.io/pause:3.1
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cache add k8s.gcr.io/pause:3.3
functional_test.go:1046: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 cache add k8s.gcr.io/pause:3.3: (2.839814218s)
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cache add k8s.gcr.io/pause:latest
E0329 17:24:45.697025  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
functional_test.go:1046: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 cache add k8s.gcr.io/pause:latest: (3.80191328s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.16s)

TestFunctional/serial/CacheCmd/cache/add_local (2.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220329171943-564087 /tmp/functional-20220329171943-5640873834762178
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cache add minikube-local-cache-test:functional-20220329171943-564087
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 cache add minikube-local-cache-test:functional-20220329171943-564087: (2.319907661s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cache delete minikube-local-cache-test:functional-20220329171943-564087
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220329171943-564087
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.61s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (366.159158ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 cache reload: (1.990144688s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.09s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:713: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 kubectl -- --context functional-20220329171943-564087 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:738: (dbg) Run:  out/kubectl --context functional-20220329171943-564087 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (574.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:754: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220329171943-564087 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0329 17:29:18.011963  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:34:18.011452  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
functional_test.go:754: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220329171943-564087 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (9m34.162130457s)
functional_test.go:758: restart took 9m34.162289486s for "functional-20220329171943-564087" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (574.16s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:807: (dbg) Run:  kubectl --context functional-20220329171943-564087 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:822: etcd phase: Running
functional_test.go:832: etcd status: Ready
functional_test.go:822: kube-apiserver phase: Running
functional_test.go:832: kube-apiserver status: Ready
functional_test.go:822: kube-controller-manager phase: Running
functional_test.go:832: kube-controller-manager status: Ready
functional_test.go:822: kube-scheduler phase: Running
functional_test.go:832: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.03s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 logs: (1.033429916s)
--- PASS: TestFunctional/serial/LogsCmd (1.03s)

TestFunctional/serial/LogsFileCmd (1.02s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 logs --file /tmp/functional-20220329171943-5640873790624839/logs.txt
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 logs --file /tmp/functional-20220329171943-5640873790624839/logs.txt: (1.020743525s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.02s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 config get cpus: exit status 14 (65.15172ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 config get cpus: exit status 14 (71.001279ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
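The round-trip above (unset key fails with exit status 14, `set` then `get` succeeds, `unset` makes `get` fail again) can be modeled stand-alone. A shell function stands in for the `minikube config` subcommand here; only the exit status 14 for a missing key is taken from the log:

```shell
# Toy model of the ConfigCmd round-trip. The config() function below is a
# hypothetical stand-in for `minikube -p <profile> config`, holding one key.
cfg=''
config() {
  case "$1" in
    set)   cfg="$2=$3" ;;
    unset) cfg='' ;;
    get)   [ -n "$cfg" ] && echo "${cfg#*=}" || return 14 ;;  # 14 = key not found
  esac
}
config get cpus && exit 1             # must fail: key not set yet
config set cpus 2
[ "$(config get cpus)" = "2" ] || exit 1
config unset cpus
config get cpus && exit 1 || echo "round-trip OK"
```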

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220329171943-564087 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:971: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220329171943-564087 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (226.823512ms)

                                                
                                                
-- stdout --
	* [functional-20220329171943-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 17:34:57.557531  611002 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:34:57.557643  611002 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:57.557653  611002 out.go:310] Setting ErrFile to fd 2...
	I0329 17:34:57.557659  611002 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:57.557825  611002 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:34:57.558087  611002 out.go:304] Setting JSON to false
	I0329 17:34:57.559168  611002 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8251,"bootTime":1648567047,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:34:57.559249  611002 start.go:124] virtualization: kvm guest
	I0329 17:34:57.562248  611002 out.go:176] * [functional-20220329171943-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0329 17:34:57.563619  611002 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:34:57.564898  611002 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:34:57.566204  611002 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:34:57.567461  611002 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 17:34:57.568788  611002 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 17:34:57.569381  611002 config.go:176] Loaded profile config "functional-20220329171943-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:34:57.569931  611002 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:34:57.615198  611002 docker.go:137] docker version: linux-20.10.14
	I0329 17:34:57.615357  611002 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:57.715245  611002 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-29 17:34:57.644094657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:57.715351  611002 docker.go:254] overlay module found
	I0329 17:34:57.717384  611002 out.go:176] * Using the docker driver based on existing profile
	I0329 17:34:57.717418  611002 start.go:283] selected driver: docker
	I0329 17:34:57.717434  611002 start.go:800] validating driver "docker" against &{Name:functional-20220329171943-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329171943-564087 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:fa
lse registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:34:57.717556  611002 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0329 17:34:57.717605  611002 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0329 17:34:57.717625  611002 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0329 17:34:57.719666  611002 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0329 17:34:57.722196  611002 out.go:176] 
	W0329 17:34:57.722300  611002 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0329 17:34:57.723422  611002 out.go:176] 

                                                
                                                
** /stderr **
functional_test.go:988: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220329171943-564087 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.55s)
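The `RSRC_INSUFFICIENT_REQ_MEMORY` exit above is a pre-flight validation: the requested 250MiB is compared against a usable floor before any resources are touched. A hedged sketch of that check — the 1800MB floor and the message wording come from the log, the function name is invented:

```go
package main

import "fmt"

// validateRequestedMemory is a hypothetical reconstruction of the check that
// produced "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" in the dry run above.
func validateRequestedMemory(reqMB int) error {
	const minUsableMB = 1800 // the "usable minimum" quoted in the log
	if reqMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			reqMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // rejected, as in --memory 250MB above
	fmt.Println(validateRequestedMemory(4000)) // the profile's configured Memory:4000 passes
}
```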

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1017: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220329171943-564087 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1017: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220329171943-564087 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (213.005083ms)

                                                
                                                
-- stdout --
	* [functional-20220329171943-564087] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 17:34:50.860959  609449 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:34:50.861086  609449 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:50.861094  609449 out.go:310] Setting ErrFile to fd 2...
	I0329 17:34:50.861100  609449 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:50.861250  609449 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:34:50.861504  609449 out.go:304] Setting JSON to false
	I0329 17:34:50.862523  609449 start.go:114] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8244,"bootTime":1648567047,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0329 17:34:50.862600  609449 start.go:124] virtualization: kvm guest
	I0329 17:34:50.865030  609449 out.go:176] * [functional-20220329171943-564087] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	I0329 17:34:50.866722  609449 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:34:50.868023  609449 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:34:50.869312  609449 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 17:34:50.870607  609449 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 17:34:50.872590  609449 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0329 17:34:50.873009  609449 config.go:176] Loaded profile config "functional-20220329171943-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:34:50.873453  609449 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:34:50.914089  609449 docker.go:137] docker version: linux-20.10.14
	I0329 17:34:50.914206  609449 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:51.004837  609449 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:74 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-03-29 17:34:50.943015771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:51.004943  609449 docker.go:254] overlay module found
	I0329 17:34:51.007068  609449 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0329 17:34:51.007096  609449 start.go:283] selected driver: docker
	I0329 17:34:51.007102  609449 start.go:800] validating driver "docker" against &{Name:functional-20220329171943-564087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329171943-564087 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:fa
lse registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:34:51.007223  609449 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0329 17:34:51.007259  609449 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0329 17:34:51.007275  609449 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0329 17:34:51.008734  609449 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0329 17:34:51.010706  609449 out.go:176] 
	W0329 17:34:51.010796  609449 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0329 17:34:51.012063  609449 out.go:176] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:851: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 status
functional_test.go:857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.39s)
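The `-f` argument passed to `status` above is a Go `text/template` format string over the status fields. A self-contained sketch of how such a format renders — the `Status` field set is inferred from the template in the log, and `renderStatus` is an invented helper, not minikube's code. (The log's format string spells one label `kublet`; that typo is in the test command itself, not in the template mechanism.)

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status holds the fields referenced by the -f format string in the log;
// minikube's real status struct may carry more fields.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus applies a "status -f"-style Go template to a Status value.
func renderStatus(format string, st Status) string {
	var b strings.Builder
	template.Must(template.New("status").Parse(format)).Execute(&b, st)
	return b.String()
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	fmt.Println(renderStatus(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}", st))
	// → host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
}
```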

                                                
                                    
TestFunctional/parallel/ServiceCmd (14.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) Run:  kubectl --context functional-20220329171943-564087 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1449: (dbg) Run:  kubectl --context functional-20220329171943-564087 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-5pkpn" [91048f0a-547a-46d1-8ff1-589d0bc25800] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-5pkpn" [91048f0a-547a-46d1-8ff1-589d0bc25800] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-5pkpn" [91048f0a-547a-46d1-8ff1-589d0bc25800] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.07814278s
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1473: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1486: found endpoint: https://192.168.49.2:31390
functional_test.go:1501: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 service hello-node --url --format={{.IP}}
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 service hello-node --url
functional_test.go:1521: found endpoint for hello-node: http://192.168.49.2:31390
--- PASS: TestFunctional/parallel/ServiceCmd (14.63s)
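The endpoints the test found (`https://192.168.49.2:31390` and `http://192.168.49.2:31390`) are composed from the node IP and the service's NodePort. A minimal sketch of that composition — `serviceURL` is an invented name, and the values below are taken from the log, not queried live:

```go
package main

import "fmt"

// serviceURL sketches how "minikube service --url" builds its endpoint from
// the cluster node IP and the exposed NodePort; --https selects the scheme.
func serviceURL(nodeIP string, nodePort int, https bool) string {
	scheme := "http"
	if https {
		scheme = "https"
	}
	return fmt.Sprintf("%s://%s:%d", scheme, nodeIP, nodePort)
}

func main() {
	// Values from the hello-node service above.
	fmt.Println(serviceURL("192.168.49.2", 31390, true))  // → https://192.168.49.2:31390
	fmt.Println(serviceURL("192.168.49.2", 31390, false)) // → http://192.168.49.2:31390
}
```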

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (18.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) Run:  kubectl --context functional-20220329171943-564087 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1575: (dbg) Run:  kubectl --context functional-20220329171943-564087 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:343: "hello-node-connect-74cf8bc446-j6cdn" [53789eb7-3f0f-4089-92c2-7821278b9301] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:343: "hello-node-connect-74cf8bc446-j6cdn" [53789eb7-3f0f-4089-92c2-7821278b9301] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.006486957s
functional_test.go:1589: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 service hello-node-connect --url
functional_test.go:1595: found endpoint for hello-node-connect: http://192.168.49.2:31191
functional_test.go:1615: http://192.168.49.2:31191: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-74cf8bc446-j6cdn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31191
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.74s)
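The echoserver body above is mostly `key=value` lines (`client_address=172.17.0.1`, `method=GET`, …), which a test can fold into a map for assertions. A small sketch with an invented helper name, fed a fragment of the response from the log:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEchoBody collects the key=value lines of an echoserver response into a
// map, skipping headings and free-text lines that contain no '='.
func parseEchoBody(body string) map[string]string {
	out := map[string]string{}
	for _, line := range strings.Split(body, "\n") {
		k, v, ok := strings.Cut(strings.TrimSpace(line), "=")
		if !ok {
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	body := "Request Information:\n\tclient_address=172.17.0.1\n\tmethod=GET\n\trequest_version=1.1"
	m := parseEchoBody(body)
	fmt.Println(m["client_address"], m["method"]) // → 172.17.0.1 GET
}
```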

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1630: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 addons list
functional_test.go:1642: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1665: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1682: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh -n functional-20220329171943-564087 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 cp functional-20220329171943-564087:/home/docker/cp-test.txt /tmp/mk_test2393693594/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh -n functional-20220329171943-564087 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)

TestFunctional/parallel/MySQL (25.37s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-20220329171943-564087 replace --force -f testdata/mysql.yaml
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-b87c45988-hmlz9" [34605e6c-442f-4bff-97fe-0a6f9026b091] Pending
helpers_test.go:343: "mysql-b87c45988-hmlz9" [34605e6c-442f-4bff-97fe-0a6f9026b091] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-hmlz9" [34605e6c-442f-4bff-97fe-0a6f9026b091] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.006467688s
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329171943-564087 exec mysql-b87c45988-hmlz9 -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329171943-564087 exec mysql-b87c45988-hmlz9 -- mysql -ppassword -e "show databases;": exit status 1 (141.670539ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329171943-564087 exec mysql-b87c45988-hmlz9 -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329171943-564087 exec mysql-b87c45988-hmlz9 -- mysql -ppassword -e "show databases;": exit status 1 (155.85078ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329171943-564087 exec mysql-b87c45988-hmlz9 -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329171943-564087 exec mysql-b87c45988-hmlz9 -- mysql -ppassword -e "show databases;": exit status 1 (132.129272ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329171943-564087 exec mysql-b87c45988-hmlz9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.37s)

TestFunctional/parallel/FileSync (0.44s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1870: Checking for existence of /etc/test/nested/copy/564087/hosts within VM
functional_test.go:1872: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo cat /etc/test/nested/copy/564087/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1877: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /etc/ssl/certs/564087.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo cat /etc/ssl/certs/564087.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /usr/share/ca-certificates/564087.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo cat /usr/share/ca-certificates/564087.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1914: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1940: Checking for existence of /etc/ssl/certs/5640872.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo cat /etc/ssl/certs/5640872.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1940: Checking for existence of /usr/share/ca-certificates/5640872.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo cat /usr/share/ca-certificates/5640872.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1940: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.48s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20220329171943-564087 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo systemctl is-active crio": exit status 1 (380.930499ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

TestFunctional/parallel/DockerEnv/bash (1.49s)

=== RUN   TestFunctional/parallel/DockerEnv/bash

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:496: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220329171943-564087 docker-env) && out/minikube-linux-amd64 status -p functional-20220329171943-564087"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:519: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220329171943-564087 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220329171943-564087
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format table:
|---------------------------------------------|----------------------------------|---------------|--------|
|                    Image                    |               Tag                |   Image ID    |  Size  |
|---------------------------------------------|----------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.3                              | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                              | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-20220329171943-564087 | 937dd178286c9 | 1.24MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                          | 3fc1d62d65872 | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                          | 884d49d6d8c9f | 53.5MB |
| docker.io/kubernetesui/dashboard            | v2.3.1                           | e1482a24335a6 | 220MB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                           | 7801cfc6d5c07 | 34.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                               | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-20220329171943-564087 | 17f457dcfd9ee | 30B    |
| k8s.gcr.io/etcd                             | 3.5.1-0                          | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                           | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                              | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.1                              | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine                           | 53722defe6278 | 23.4MB |
| gcr.io/google-containers/addon-resizer      | functional-20220329171943-564087 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                     | 56cc512116c8f | 4.4MB  |
| docker.io/library/mysql                     | 5.7                              | 05311a87aeb4d | 450MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.5                          | 3c53fa8541f95 | 112MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                          | b0c9e5e4dbb14 | 125MB  |
| gcr.io/k8s-minikube/busybox                 | latest                           | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | latest                           | 350b164e7ae1d | 240kB  |
|---------------------------------------------|----------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format json:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220329171943-564087"],"size":"32900000"},{"id":"17f457dcfd9ee68ca414c6aa2d8b8069d7ad30fd8f3c9e67994fe650840a9615","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220329171943-564087"],"size":"30"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"135000000"},{"id":"53722defe627853c4f67a743b54246916074a824bc93bc7e05f452c6929374bf","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"937dd178286c9b73f833221449fb8d197909af01697c664e5e31e369fd1081e3","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220329171943-564087"],"size":"1240000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"450000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls --format yaml:
- id: 17f457dcfd9ee68ca414c6aa2d8b8069d7ad30fd8f3c9e67994fe650840a9615
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220329171943-564087
size: "30"
- id: 53722defe627853c4f67a743b54246916074a824bc93bc7e05f452c6929374bf
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
- id: 05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "450000000"
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh pgrep buildkitd: exit status 1 (338.87469ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image build -t localhost/my-image:functional-20220329171943-564087 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 image build -t localhost/my-image:functional-20220329171943-564087 testdata/build: (2.355602183s)
functional_test.go:317: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220329171943-564087 image build -t localhost/my-image:functional-20220329171943-564087 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 5b086069fc96
Removing intermediate container 5b086069fc96
---> 4633b6c46148
Step 3/3 : ADD content.txt /
---> 937dd178286c
Successfully built 937dd178286c
Successfully tagged localhost/my-image:functional-20220329171943-564087
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)

TestFunctional/parallel/ImageCommands/Setup (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.440402664s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.48s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2200: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2214: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2060: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2060: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2060: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.82s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1276: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.82s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087: (3.503192691s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220329171943-564087 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220329171943-564087 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [be204e85-3df8-4143-a177-fd50277b942d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [be204e85-3df8-4143-a177-fd50277b942d] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.015626786s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1316: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1321: Took "450.061465ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1335: Took "73.972918ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1367: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1372: Took "403.230289ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1380: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1385: Took "72.458746ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087: (2.503429455s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.75s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.541471451s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
functional_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087: (4.021238995s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.90s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220329171943-564087 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://10.99.176.178 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220329171943-564087 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image save gcr.io/google-containers/addon-resizer:functional-20220329171943-564087 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image rm gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.217635098s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.46s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-20220329171943-564087 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220329171943-564087: (2.311222089s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.38s)

TestFunctional/parallel/MountCmd/any-port (15.41s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220329171943-564087 /tmp/mounttest1490329758:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1648575291016083056" to /tmp/mounttest1490329758/created-by-test
functional_test_mount_test.go:110: wrote "test-1648575291016083056" to /tmp/mounttest1490329758/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1648575291016083056" to /tmp/mounttest1490329758/test-1648575291016083056
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.500279ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 29 17:34 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 29 17:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 29 17:34 test-1648575291016083056
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh cat /mount-9p/test-1648575291016083056

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220329171943-564087 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [f35d19da-8074-48f0-913c-a052f838e5ea] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [f35d19da-8074-48f0-913c-a052f838e5ea] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [f35d19da-8074-48f0-913c-a052f838e5ea] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.007388573s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220329171943-564087 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220329171943-564087 /tmp/mounttest1490329758:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.41s)

TestFunctional/parallel/MountCmd/specific-port (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220329171943-564087 /tmp/mounttest532587473:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.031942ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220329171943-564087 /tmp/mounttest532587473:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh "sudo umount -f /mount-9p": exit status 1 (343.161895ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20220329171943-564087 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220329171943-564087 /tmp/mounttest532587473:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220329171943-564087
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220329171943-564087
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220329171943-564087
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (56.4s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220329174003-564087 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220329174003-564087 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (56.397136221s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (56.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.25s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 addons enable ingress --alsologtostderr -v=5: (17.252315994s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.25s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (45.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329174003-564087 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220329174003-564087 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.879046706s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329174003-564087 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329174003-564087 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [e1ae0205-973b-46ed-951d-844233a39090] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [e1ae0205-973b-46ed-951d-844233a39090] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.006061628s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329174003-564087 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 addons disable ingress-dns --alsologtostderr -v=1: (7.268915817s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220329174003-564087 addons disable ingress --alsologtostderr -v=1: (7.261571485s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (45.53s)

TestJSONOutput/start/Command (40.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220329174205-564087 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220329174205-564087 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (40.67755143s)
--- PASS: TestJSONOutput/start/Command (40.68s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220329174205-564087 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220329174205-564087 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220329174205-564087 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220329174205-564087 --output=json --user=testUser: (10.847773546s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220329174300-564087 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220329174300-564087 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.209388ms)

-- stdout --
	{"specversion":"1.0","id":"3d8ed1e4-8021-431e-a47b-d4d69c73fa44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220329174300-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e20ff0e-a9de-4cba-8c4c-31f360764459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13730"}}
	{"specversion":"1.0","id":"9684b7ce-fdd3-4fb9-88a0-3a5093015af8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8ad9c9f2-a61f-4ba0-99c2-c40342a9a63f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig"}}
	{"specversion":"1.0","id":"41a341d1-f856-4c8f-acf9-50abf08fea39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube"}}
	{"specversion":"1.0","id":"3d0cd94d-2def-4576-b310-5eca17f665e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"31c4d4f4-a75c-4c37-9257-4eda69a69b60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
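Each line in the stdout block above is a CloudEvents envelope emitted by `minikube start --output=json`, with the event kind in the `type` field and the human-readable text in `data.message`. A minimal sketch of consuming such output (sample lines abbreviated from the block above; this is an illustration, not part of the test suite):

```python
import json

# Abbreviated CloudEvents lines as emitted by `minikube start --output=json`
# (taken from the stdout block above).
lines = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver \'fail\' is not supported on linux/amd64"}}',
]

for line in lines:
    event = json.loads(line)
    # Last path segment of the type, e.g. "step", "info", or "error".
    kind = event["type"].rsplit(".", 1)[-1]
    print(f"{kind}: {event['data']['message']}")
```

The `io.k8s.sigs.minikube.error` event carries the same exit code (56, `DRV_UNSUPPORTED_OS`) that the test asserts on the process exit status.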
helpers_test.go:176: Cleaning up "json-output-error-20220329174300-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220329174300-564087
--- PASS: TestErrorJSONOutput (0.28s)

TestKicCustomNetwork/create_custom_network (30.03s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220329174300-564087 --network=
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220329174300-564087 --network=: (27.668611206s)
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220329174300-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220329174300-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220329174300-564087: (2.330622079s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.03s)

TestKicCustomNetwork/use_default_bridge_network (28.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220329174330-564087 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220329174330-564087 --network=bridge: (26.175649443s)
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220329174330-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220329174330-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220329174330-564087: (2.137178401s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.34s)

TestKicExistingNetwork (28.49s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220329174358-564087 --network=existing-network
E0329 17:44:18.010783  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220329174358-564087 --network=existing-network: (25.950255888s)
helpers_test.go:176: Cleaning up "existing-network-20220329174358-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220329174358-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220329174358-564087: (2.322810718s)
--- PASS: TestKicExistingNetwork (28.49s)

TestKicCustomSubnet (28.52s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220329174427-564087 --subnet=192.168.60.0/24
E0329 17:44:30.085751  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:30.091048  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:30.101378  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:30.121723  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:30.162104  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:30.242495  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:30.402919  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:30.723587  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:31.364530  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:32.645016  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:35.207001  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:40.327158  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:44:50.568361  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
kic_custom_network_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220329174427-564087 --subnet=192.168.60.0/24: (26.142090542s)
kic_custom_network_test.go:134: (dbg) Run:  docker network inspect custom-subnet-20220329174427-564087 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-20220329174427-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220329174427-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220329174427-564087: (2.343337021s)
--- PASS: TestKicCustomSubnet (28.52s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (5.55s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220329174455-564087 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220329174455-564087 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.55447151s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.55s)

TestMountStart/serial/VerifyMountFirst (0.33s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220329174455-564087 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)

TestMountStart/serial/StartWithMountSecond (5.75s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220329174455-564087 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220329174455-564087 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.748781998s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.75s)

TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220329174455-564087 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220329174455-564087 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220329174455-564087 --alsologtostderr -v=5: (1.738650361s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220329174455-564087 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220329174455-564087
E0329 17:45:11.049396  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220329174455-564087: (1.272339673s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (6.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220329174455-564087
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220329174455-564087: (5.883459629s)
--- PASS: TestMountStart/serial/RestartStopped (6.88s)

TestMountStart/serial/VerifyMountPostStop (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220329174455-564087 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)

TestMultiNode/serial/FreshStart2Nodes (86.05s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220329174520-564087 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0329 17:45:52.009662  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 17:46:17.158275  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:17.163548  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:17.173790  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:17.194082  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:17.234385  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:17.314746  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:17.474970  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:17.796088  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:18.436875  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:19.717687  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:22.279470  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:27.400345  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 17:46:37.640524  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220329174520-564087 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m25.472470091s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.05s)

TestMultiNode/serial/AddNode (28.13s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220329174520-564087 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220329174520-564087 -v 3 --alsologtostderr: (27.373544604s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.13s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (11.86s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --output json --alsologtostderr
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp testdata/cp-test.txt multinode-20220329174520-564087:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087:/home/docker/cp-test.txt /tmp/mk_cp_test72328866/cp-test_multinode-20220329174520-564087.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087:/home/docker/cp-test.txt multinode-20220329174520-564087-m02:/home/docker/cp-test_multinode-20220329174520-564087_multinode-20220329174520-564087-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m02 "sudo cat /home/docker/cp-test_multinode-20220329174520-564087_multinode-20220329174520-564087-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087:/home/docker/cp-test.txt multinode-20220329174520-564087-m03:/home/docker/cp-test_multinode-20220329174520-564087_multinode-20220329174520-564087-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m03 "sudo cat /home/docker/cp-test_multinode-20220329174520-564087_multinode-20220329174520-564087-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp testdata/cp-test.txt multinode-20220329174520-564087-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087-m02:/home/docker/cp-test.txt /tmp/mk_cp_test72328866/cp-test_multinode-20220329174520-564087-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087-m02:/home/docker/cp-test.txt multinode-20220329174520-564087:/home/docker/cp-test_multinode-20220329174520-564087-m02_multinode-20220329174520-564087.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087 "sudo cat /home/docker/cp-test_multinode-20220329174520-564087-m02_multinode-20220329174520-564087.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087-m02:/home/docker/cp-test.txt multinode-20220329174520-564087-m03:/home/docker/cp-test_multinode-20220329174520-564087-m02_multinode-20220329174520-564087-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m03 "sudo cat /home/docker/cp-test_multinode-20220329174520-564087-m02_multinode-20220329174520-564087-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp testdata/cp-test.txt multinode-20220329174520-564087-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087-m03:/home/docker/cp-test.txt /tmp/mk_cp_test72328866/cp-test_multinode-20220329174520-564087-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087-m03:/home/docker/cp-test.txt multinode-20220329174520-564087:/home/docker/cp-test_multinode-20220329174520-564087-m03_multinode-20220329174520-564087.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087 "sudo cat /home/docker/cp-test_multinode-20220329174520-564087-m03_multinode-20220329174520-564087.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 cp multinode-20220329174520-564087-m03:/home/docker/cp-test.txt multinode-20220329174520-564087-m02:/home/docker/cp-test_multinode-20220329174520-564087-m03_multinode-20220329174520-564087-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 ssh -n multinode-20220329174520-564087-m02 "sudo cat /home/docker/cp-test_multinode-20220329174520-564087-m03_multinode-20220329174520-564087-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.86s)

TestMultiNode/serial/StopNode (2.48s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220329174520-564087 node stop m03: (1.284117061s)
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220329174520-564087 status: exit status 7 (595.498516ms)

-- stdout --
	multinode-20220329174520-564087
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220329174520-564087-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220329174520-564087-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr: exit status 7 (597.035697ms)

-- stdout --
	multinode-20220329174520-564087
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220329174520-564087-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220329174520-564087-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0329 17:55:38.763657  673200 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:55:38.763774  673200 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:55:38.763785  673200 out.go:310] Setting ErrFile to fd 2...
	I0329 17:55:38.763790  673200 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:55:38.763922  673200 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:55:38.764100  673200 out.go:304] Setting JSON to false
	I0329 17:55:38.764127  673200 mustload.go:65] Loading cluster: multinode-20220329174520-564087
	I0329 17:55:38.764454  673200 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:55:38.764479  673200 status.go:253] checking status of multinode-20220329174520-564087 ...
	I0329 17:55:38.764902  673200 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:55:38.797587  673200 status.go:328] multinode-20220329174520-564087 host status = "Running" (err=<nil>)
	I0329 17:55:38.797617  673200 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:55:38.797896  673200 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087
	I0329 17:55:38.830499  673200 host.go:66] Checking if "multinode-20220329174520-564087" exists ...
	I0329 17:55:38.830922  673200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 17:55:38.830975  673200 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087
	I0329 17:55:38.864128  673200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49514 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087/id_rsa Username:docker}
	I0329 17:55:38.945556  673200 ssh_runner.go:195] Run: systemctl --version
	I0329 17:55:38.949042  673200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:55:38.957906  673200 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:55:39.047341  673200 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:73 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-29 17:55:38.986659845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:55:39.047874  673200 kubeconfig.go:92] found "multinode-20220329174520-564087" server: "https://192.168.49.2:8443"
	I0329 17:55:39.047908  673200 api_server.go:165] Checking apiserver status ...
	I0329 17:55:39.047937  673200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 17:55:39.058924  673200 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1711/cgroup
	I0329 17:55:39.065987  673200 api_server.go:181] apiserver freezer: "3:freezer:/docker/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/kubepods/burstable/pod27de21fd79a687dd5ac855c0b6b9898c/b7d139996016acfa3cf2c01779f6b962dbc53eb2eb0b7567726393f95167ae70"
	I0329 17:55:39.066043  673200 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e/kubepods/burstable/pod27de21fd79a687dd5ac855c0b6b9898c/b7d139996016acfa3cf2c01779f6b962dbc53eb2eb0b7567726393f95167ae70/freezer.state
	I0329 17:55:39.072231  673200 api_server.go:203] freezer state: "THAWED"
	I0329 17:55:39.072264  673200 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0329 17:55:39.076571  673200 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0329 17:55:39.076593  673200 status.go:419] multinode-20220329174520-564087 apiserver status = Running (err=<nil>)
	I0329 17:55:39.076604  673200 status.go:255] multinode-20220329174520-564087 status: &{Name:multinode-20220329174520-564087 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0329 17:55:39.076636  673200 status.go:253] checking status of multinode-20220329174520-564087-m02 ...
	I0329 17:55:39.076883  673200 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:55:39.108318  673200 status.go:328] multinode-20220329174520-564087-m02 host status = "Running" (err=<nil>)
	I0329 17:55:39.108344  673200 host.go:66] Checking if "multinode-20220329174520-564087-m02" exists ...
	I0329 17:55:39.108582  673200 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329174520-564087-m02
	I0329 17:55:39.140897  673200 host.go:66] Checking if "multinode-20220329174520-564087-m02" exists ...
	I0329 17:55:39.141213  673200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 17:55:39.141252  673200 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329174520-564087-m02
	I0329 17:55:39.172676  673200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49519 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329174520-564087-m02/id_rsa Username:docker}
	I0329 17:55:39.257608  673200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 17:55:39.266636  673200 status.go:255] multinode-20220329174520-564087-m02 status: &{Name:multinode-20220329174520-564087-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0329 17:55:39.266691  673200 status.go:253] checking status of multinode-20220329174520-564087-m03 ...
	I0329 17:55:39.266942  673200 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m03 --format={{.State.Status}}
	I0329 17:55:39.299951  673200 status.go:328] multinode-20220329174520-564087-m03 host status = "Stopped" (err=<nil>)
	I0329 17:55:39.299976  673200 status.go:341] host is not running, skipping remaining checks
	I0329 17:55:39.299981  673200 status.go:255] multinode-20220329174520-564087-m03 status: &{Name:multinode-20220329174520-564087-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.48s)
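The apiserver check in the stderr trace above finds the kube-apiserver PID, reads its `freezer` line from `/proc/<pid>/cgroup`, and then reads `freezer.state` under `/sys/fs/cgroup`. A minimal sketch of that string handling, using the cgroup line copied from the trace (this is an illustration, not minikube's actual implementation):

```python
# Derive the freezer.state path from a /proc/<pid>/cgroup line, as the
# apiserver status check above does. The line is copied from the trace.
cgroup_line = (
    "3:freezer:/docker/09d1f85080aa1e240db2a9ba79107eda4a95dab9132ae3a69c1464363390df4e"
    "/kubepods/burstable/pod27de21fd79a687dd5ac855c0b6b9898c"
    "/b7d139996016acfa3cf2c01779f6b962dbc53eb2eb0b7567726393f95167ae70"
)

# Format is <hierarchy-id>:<subsystem>:<cgroup-path>; split on the first
# two colons only, since the path itself contains no colons here.
_, subsystem, cgroup_path = cgroup_line.split(":", 2)
state_file = f"/sys/fs/cgroup/{subsystem}{cgroup_path}/freezer.state"
print(state_file)
# A freezer.state of THAWED (as logged above) means the container is not frozen,
# so the healthz probe proceeds.
```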

TestMultiNode/serial/StartAfterStop (24.62s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220329174520-564087 node start m03 --alsologtostderr: (23.774971537s)
multinode_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.62s)
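The status traces above resolve each node's SSH endpoint with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` against `docker container inspect`. The same two-level index can be emulated in plain Python over a trimmed, hand-written stand-in for the inspect document (not real inspect output):

```python
import json

# Emulate the inspect template used in the traces above:
# index .NetworkSettings.Ports "22/tcp" -> list of bindings; index ... 0 -> first.
inspect_doc = json.loads("""
{
  "NetworkSettings": {
    "Ports": {
      "22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "49514"}]
    }
  }
}
""")

host_port = inspect_doc["NetworkSettings"]["Ports"]["22/tcp"][0]["HostPort"]
print(host_port)  # the port sshutil.go then dials on 127.0.0.1
```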

TestMultiNode/serial/RestartKeepsNodes (106.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220329174520-564087
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220329174520-564087
E0329 17:56:17.159209  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220329174520-564087: (22.690266076s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220329174520-564087 --wait=true -v=8 --alsologtostderr
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220329174520-564087 --wait=true -v=8 --alsologtostderr: (1m24.024865664s)
multinode_test.go:305: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220329174520-564087
--- PASS: TestMultiNode/serial/RestartKeepsNodes (106.84s)

TestMultiNode/serial/DeleteNode (5.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220329174520-564087 node delete m03: (4.645754686s)
multinode_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)
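After `node delete m03`, the test runs `docker volume ls` (multinode_test.go:419) so it can confirm the deleted node's volume is gone. A sketch of that check over sample output (the listing below is illustrative, not captured from this run):

```python
# Parse `docker volume ls` output (header row, then DRIVER / VOLUME NAME
# columns) and assert the deleted node's volume is absent.
volume_ls = """\
DRIVER    VOLUME NAME
local     multinode-20220329174520-564087
local     multinode-20220329174520-564087-m02
"""

volumes = [line.split()[-1] for line in volume_ls.splitlines()[1:]]
deleted = "multinode-20220329174520-564087-m03"
assert deleted not in volumes  # m03's volume was removed with the node
print(volumes)
```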

TestMultiNode/serial/StopMultiNode (21.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 stop
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220329174520-564087 stop: (21.581583549s)
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220329174520-564087 status: exit status 7 (127.823859ms)

-- stdout --
	multinode-20220329174520-564087
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220329174520-564087-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr: exit status 7 (125.703732ms)

-- stdout --
	multinode-20220329174520-564087
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220329174520-564087-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0329 17:58:17.878924  688141 out.go:297] Setting OutFile to fd 1 ...
	I0329 17:58:17.879035  688141 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:58:17.879044  688141 out.go:310] Setting ErrFile to fd 2...
	I0329 17:58:17.879048  688141 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:58:17.879167  688141 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 17:58:17.879362  688141 out.go:304] Setting JSON to false
	I0329 17:58:17.879391  688141 mustload.go:65] Loading cluster: multinode-20220329174520-564087
	I0329 17:58:17.879770  688141 config.go:176] Loaded profile config "multinode-20220329174520-564087": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:58:17.879795  688141 status.go:253] checking status of multinode-20220329174520-564087 ...
	I0329 17:58:17.880211  688141 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087 --format={{.State.Status}}
	I0329 17:58:17.912538  688141 status.go:328] multinode-20220329174520-564087 host status = "Stopped" (err=<nil>)
	I0329 17:58:17.912570  688141 status.go:341] host is not running, skipping remaining checks
	I0329 17:58:17.912576  688141 status.go:255] multinode-20220329174520-564087 status: &{Name:multinode-20220329174520-564087 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0329 17:58:17.912635  688141 status.go:253] checking status of multinode-20220329174520-564087-m02 ...
	I0329 17:58:17.912913  688141 cli_runner.go:133] Run: docker container inspect multinode-20220329174520-564087-m02 --format={{.State.Status}}
	I0329 17:58:17.944872  688141 status.go:328] multinode-20220329174520-564087-m02 host status = "Stopped" (err=<nil>)
	I0329 17:58:17.944901  688141 status.go:341] host is not running, skipping remaining checks
	I0329 17:58:17.944908  688141 status.go:255] multinode-20220329174520-564087-m02 status: &{Name:multinode-20220329174520-564087-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.84s)
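The `-- stdout --` blocks above group each node's fields under its name, and `minikube status` exits 7 when any host is stopped. A sketch of parsing that text into per-node records and deriving the stopped condition (this mirrors the captured output; it is not minikube's actual parser):

```python
# Parse `minikube status` stdout into per-node dicts. The sample mirrors
# the all-stopped block captured above, which maps to exit status 7.
sample = """\
multinode-20220329174520-564087
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-20220329174520-564087-m02
type: Worker
host: Stopped
kubelet: Stopped
"""

def parse_status(text):
    nodes = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        node = {"name": lines[0]}
        for line in lines[1:]:
            key, _, value = line.partition(": ")
            node[key] = value
        nodes.append(node)
    return nodes

nodes = parse_status(sample)
some_stopped = any(n["host"] == "Stopped" for n in nodes)
print(some_stopped)  # minikube reports this state with exit status 7
```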

TestMultiNode/serial/RestartMultiNode (59.32s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220329174520-564087 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220329174520-564087 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.605459197s)
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220329174520-564087 status --alsologtostderr
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.32s)
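The go-template passed to `kubectl get nodes` above walks every node's `.status.conditions` and prints the status of each condition whose type is `Ready`. The same filter, expressed over a hand-written stand-in for the `-o json` document (assumed shape, not real API output):

```python
# Equivalent of the go-template Ready filter used above: for each node,
# keep the status of its condition with type == "Ready".
items = [
    {"status": {"conditions": [
        {"type": "MemoryPressure", "status": "False"},
        {"type": "Ready", "status": "True"},
    ]}},
    {"status": {"conditions": [
        {"type": "Ready", "status": "True"},
    ]}},
]

ready = [c["status"] for node in items
         for c in node["status"]["conditions"] if c["type"] == "Ready"]
print(ready)  # one "True" per node when the restarted cluster is healthy
```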

TestMultiNode/serial/ValidateNameConflict (29.62s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220329174520-564087
multinode_test.go:457: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220329174520-564087-m02 --driver=docker  --container-runtime=docker
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220329174520-564087-m02 --driver=docker  --container-runtime=docker: exit status 14 (73.92325ms)

-- stdout --
	* [multinode-20220329174520-564087-m02] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220329174520-564087-m02' is duplicated with machine name 'multinode-20220329174520-564087-m02' in profile 'multinode-20220329174520-564087'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220329174520-564087-m03 --driver=docker  --container-runtime=docker
E0329 17:59:18.011310  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 17:59:30.085236  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220329174520-564087-m03 --driver=docker  --container-runtime=docker: (26.714155459s)
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220329174520-564087
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220329174520-564087: exit status 80 (351.516002ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220329174520-564087
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220329174520-564087-m03 already exists in multinode-20220329174520-564087-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220329174520-564087-m03
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220329174520-564087-m03: (2.420082949s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.62s)
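The `MK_USAGE` exit (status 14) above comes from rejecting a new profile whose name collides with a machine name inside an existing multi-node profile. A sketch of that validation, with the machine list taken from this run's cluster (the function and data layout are hypothetical, not minikube's code):

```python
# Hypothetical duplicate-profile check mirroring the MK_USAGE failure
# above: a new profile name must not match any machine of an existing
# multi-node profile.
existing_profiles = {
    "multinode-20220329174520-564087": [
        "multinode-20220329174520-564087",
        "multinode-20220329174520-564087-m02",
    ],
}

def validate_profile_name(name):
    """Return an error string on collision, or None if the name is free."""
    for profile, machines in existing_profiles.items():
        if name in machines:
            return (f"Profile name '{name}' is duplicated with machine "
                    f"name '{name}' in profile '{profile}'")
    return None

err = validate_profile_name("multinode-20220329174520-564087-m02")
print(err is None)  # False: rejected, as in the exit-status-14 run above
```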

TestPreload (124.05s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220329175953-564087 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0329 18:00:53.131125  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220329175953-564087 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m21.809443335s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220329175953-564087 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220329175953-564087 -- docker pull gcr.io/k8s-minikube/busybox: (1.459447668s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220329175953-564087 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
E0329 18:01:17.158462  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220329175953-564087 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (37.897817079s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220329175953-564087 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220329175953-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220329175953-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220329175953-564087: (2.516048409s)
--- PASS: TestPreload (124.05s)

TestScheduledStopUnix (99.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220329180157-564087 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220329180157-564087 --memory=2048 --driver=docker  --container-runtime=docker: (26.318565493s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220329180157-564087 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220329180157-564087 -n scheduled-stop-20220329180157-564087
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220329180157-564087 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220329180157-564087 --cancel-scheduled
E0329 18:02:40.204628  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220329180157-564087 -n scheduled-stop-20220329180157-564087
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220329180157-564087
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220329180157-564087 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220329180157-564087
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220329180157-564087: exit status 7 (94.569613ms)

-- stdout --
	scheduled-stop-20220329180157-564087
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220329180157-564087 -n scheduled-stop-20220329180157-564087
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220329180157-564087 -n scheduled-stop-20220329180157-564087: exit status 7 (91.700045ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220329180157-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220329180157-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220329180157-564087: (1.907531477s)
--- PASS: TestScheduledStopUnix (99.95s)

TestSkaffold (59.93s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /tmp/skaffold.exe3069100590 version
skaffold_test.go:61: skaffold version: v1.37.0
skaffold_test.go:64: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220329180337-564087 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:64: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220329180337-564087 --memory=2600 --driver=docker  --container-runtime=docker: (25.790113951s)
skaffold_test.go:84: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:108: (dbg) Run:  /tmp/skaffold.exe3069100590 run --minikube-profile skaffold-20220329180337-564087 --kube-context skaffold-20220329180337-564087 --status-check=true --port-forward=false --interactive=false
E0329 18:04:18.011065  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
skaffold_test.go:108: (dbg) Done: /tmp/skaffold.exe3069100590 run --minikube-profile skaffold-20220329180337-564087 --kube-context skaffold-20220329180337-564087 --status-check=true --port-forward=false --interactive=false: (20.922279352s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-6b6574d886-5lwgb" [7915de30-ae0a-4bfc-bd97-4edb9123a95b] Running
E0329 18:04:30.084663  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010691334s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-54b9cc74dc-89n4t" [a068cb14-95a0-4e94-a17a-d7a1afe39006] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006692307s
helpers_test.go:176: Cleaning up "skaffold-20220329180337-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220329180337-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220329180337-564087: (2.542042534s)
--- PASS: TestSkaffold (59.93s)

TestInsufficientStorage (14.54s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220329180437-564087 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220329180437-564087 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.884445778s)

-- stdout --
	{"specversion":"1.0","id":"0122e51d-c1c0-4ff6-8de6-a53235e390d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220329180437-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3d054c5-6c76-4135-985a-ead24225ac12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13730"}}
	{"specversion":"1.0","id":"8c9e24b3-ecdf-48d0-ace2-83e8b965aed6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c1fa8069-8d83-412f-9010-941b83ebdfb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig"}}
	{"specversion":"1.0","id":"57edfab2-be48-41c2-a42f-737c002b0a93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube"}}
	{"specversion":"1.0","id":"651638da-35cc-4598-b267-25626e7df05b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1ff9a555-49ef-4270-bc85-5c397220b459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b45e4c06-929e-4c7e-949f-5dd0e863ee1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bf197705-dbd5-4128-a8c0-d3a1033f21c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ea5e148-5a83-4f80-86cf-05d43c2c19b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"d3934551-0b37-45bd-9208-96eb14424431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"1e1124df-7cff-47d8-8afe-f882bc704d83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220329180437-564087 in cluster insufficient-storage-20220329180437-564087","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"181ab110-3c6f-4d30-9100-397bfa92ecc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e201634b-1b0c-4d61-b903-7a29074502fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4488918c-c3a4-4c0f-83ec-95df10863d10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
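The `--output=json` stream captured above is a sequence of CloudEvents-style JSON lines whose `type` suffix distinguishes `step`, `info`, `warning`, and `error` records, with the human-readable text under `data.message`. A minimal sketch of turning such a stream back into readable progress output (the helper name is ours, not part of minikube or this test suite; field names are taken from the lines above):

```python
import json

def summarize_events(lines):
    """Render minikube's JSON event stream as plain progress lines.

    `step` events carry data.currentstep/totalsteps; other event kinds
    (info, warning, error) are prefixed with their kind.
    """
    out = []
    for line in lines:
        ev = json.loads(line)
        kind = ev["type"].rsplit(".", 1)[-1]  # "step", "info", "warning", "error"
        data = ev.get("data", {})
        if kind == "step":
            out.append(f"[{data['currentstep']}/{data['totalsteps']}] {data['message']}")
        else:
            out.append(f"{kind}: {data.get('message', '')}")
    return out
```

Fed the stdout above, step events render as `[current/total] message` and the final out-of-disk record surfaces as an `error:` line.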
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220329180437-564087 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220329180437-564087 --output=json --layout=cluster: exit status 7 (338.894316ms)

-- stdout --
	{"Name":"insufficient-storage-20220329180437-564087","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220329180437-564087","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0329 18:04:49.907675  722308 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220329180437-564087" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220329180437-564087 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220329180437-564087 --output=json --layout=cluster: exit status 7 (349.538353ms)

-- stdout --
	{"Name":"insufficient-storage-20220329180437-564087","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220329180437-564087","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0329 18:04:50.257919  722409 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220329180437-564087" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	E0329 18:04:50.266195  722409 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/insufficient-storage-20220329180437-564087/events.json: no such file or directory

** /stderr **
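The `--layout=cluster` payloads above encode state as HTTP-like codes (507 InsufficientStorage, 500 Error, 405 Stopped), each paired with a `StatusName`. A rough sketch of flattening one of those payloads into per-component states (the function name is ours; field names are as they appear in the output above):

```python
import json

def component_states(status_json):
    """Map cluster, top-level components, and per-node components from
    `minikube status --output=json --layout=cluster` to their StatusName."""
    cluster = json.loads(status_json)
    states = {"cluster": cluster["StatusName"]}
    for comp, info in cluster.get("Components", {}).items():
        states[comp] = info["StatusName"]
    for node in cluster.get("Nodes", []):
        for comp, info in node.get("Components", {}).items():
            states[f"{node['Name']}/{comp}"] = info["StatusName"]
    return states
```

Applied to the second status call above, `kubeconfig` maps to `Error` and both node components (`apiserver`, `kubelet`) to `Stopped`.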
helpers_test.go:176: Cleaning up "insufficient-storage-20220329180437-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220329180437-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220329180437-564087: (1.966301625s)
--- PASS: TestInsufficientStorage (14.54s)

TestRunningBinaryUpgrade (75.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.2003628286.exe start -p running-upgrade-20220329180631-564087 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.2003628286.exe start -p running-upgrade-20220329180631-564087 --memory=2200 --vm-driver=docker  --container-runtime=docker: (37.193506308s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220329180631-564087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220329180631-564087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.435385881s)
helpers_test.go:176: Cleaning up "running-upgrade-20220329180631-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220329180631-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220329180631-564087: (2.334047766s)
--- PASS: TestRunningBinaryUpgrade (75.36s)

TestKubernetesUpgrade (99.92s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.538691486s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220329180617-564087
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220329180617-564087: (1.381480158s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220329180617-564087 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220329180617-564087 status --format={{.Host}}: exit status 7 (106.878406ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.790342847s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220329180617-564087 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (94.727601ms)

-- stdout --
	* [kubernetes-upgrade-20220329180617-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220329180617-564087
	    minikube start -p kubernetes-upgrade-20220329180617-564087 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220329180617-5640872 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220329180617-564087 --kubernetes-version=v1.23.6-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220329180617-564087 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (15.910570266s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220329180617-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220329180617-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220329180617-564087: (3.04065372s)
--- PASS: TestKubernetesUpgrade (99.92s)

TestMissingContainerUpgrade (98.98s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3800264130.exe start -p missing-upgrade-20220329180542-564087 --memory=2200 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3800264130.exe start -p missing-upgrade-20220329180542-564087 --memory=2200 --driver=docker  --container-runtime=docker: (35.72628689s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220329180542-564087

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220329180542-564087: (10.386205304s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220329180542-564087
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220329180542-564087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220329180542-564087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.373216073s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220329180542-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220329180542-564087
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220329180542-564087: (9.947566731s)
--- PASS: TestMissingContainerUpgrade (98.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (98.335879ms)

-- stdout --
	* [NoKubernetes-20220329180452-564087] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestStoppedBinaryUpgrade/Setup (1.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.46s)

TestNoKubernetes/serial/StartWithK8s (47.34s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --driver=docker  --container-runtime=docker: (46.886164647s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220329180452-564087 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.34s)

TestStoppedBinaryUpgrade/Upgrade (94.21s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.1273639220.exe start -p stopped-upgrade-20220329180452-564087 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.1273639220.exe start -p stopped-upgrade-20220329180452-564087 --memory=2200 --vm-driver=docker  --container-runtime=docker: (52.198403451s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.1273639220.exe -p stopped-upgrade-20220329180452-564087 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.1273639220.exe -p stopped-upgrade-20220329180452-564087 stop: (12.513206538s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220329180452-564087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220329180452-564087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.502851099s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.21s)

TestNoKubernetes/serial/StartWithStopK8s (19.08s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --no-kubernetes --driver=docker  --container-runtime=docker: (16.412204497s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220329180452-564087 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220329180452-564087 status -o json: exit status 2 (376.293143ms)

-- stdout --
	{"Name":"NoKubernetes-20220329180452-564087","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220329180452-564087

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220329180452-564087: (2.292914367s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.08s)

TestNoKubernetes/serial/Start (6.37s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --no-kubernetes --driver=docker  --container-runtime=docker: (6.37224818s)
--- PASS: TestNoKubernetes/serial/Start (6.37s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220329180452-564087 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220329180452-564087 "sudo systemctl is-active --quiet service kubelet": exit status 1 (394.832982ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
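Note that the `exit status 1` / `Process exited with status 3` above is the expected outcome, not a failure: `systemctl is-active` exits 0 only when the unit is active, and an inactive unit conventionally yields 3 (LSB-style status codes), which is exactly what the test wants for kubelet. A minimal sketch of that interpretation, with the 0/3 mapping stated as a convention rather than quoted from the systemd docs:

```shell
# Interpret the exit code of `systemctl is-active --quiet <unit>`.
# 0 = unit active; 3 = unit inactive/dead (the "status 3" seen above).
# Anything else is treated as indeterminate.
kubelet_probe_result() {
  case "$1" in
    0) echo "kubelet running" ;;
    3) echo "kubelet not running" ;;
    *) echo "indeterminate (exit $1)" ;;
  esac
}
kubelet_probe_result 3
```

With kubelet genuinely active, the same probe would exit 0 and the VerifyK8sNotRunning assertion would fail.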

TestNoKubernetes/serial/ProfileList (1.9s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:180: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.02337063s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.90s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220329180452-564087
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220329180452-564087: (1.334921778s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (6.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:192: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220329180452-564087 --driver=docker  --container-runtime=docker: (6.431102803s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220329180452-564087 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220329180452-564087 "sudo systemctl is-active --quiet service kubelet": exit status 1 (387.69554ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220329180452-564087
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220329180452-564087: (1.43322083s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

TestPause/serial/Start (45.06s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220329180757-564087 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220329180757-564087 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (45.055232464s)
--- PASS: TestPause/serial/Start (45.06s)

TestPause/serial/SecondStartNoReconfiguration (5.3s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220329180757-564087 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220329180757-564087 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5.290177023s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.30s)

TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220329180757-564087 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220329180757-564087 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220329180757-564087 --output=json --layout=cluster: exit status 2 (388.906017ms)

-- stdout --
	{"Name":"pause-20220329180757-564087","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220329180757-564087","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
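The `--layout=cluster` document above encodes component health as HTTP-style status codes: 200/OK, 405/Stopped, and 418/Paused all appear in the output. A rough way to pull the top-level code out of that one-line JSON with sed; this is a sketch only (it assumes the compact single-line shape, and the sample below is abbreviated from the log), and a real consumer should reach for `jq` instead:

```shell
# Extract the cluster-level StatusCode from a shortened copy of the
# status document printed above; 418 pairs with StatusName "Paused".
status_json='{"Name":"pause-20220329180757-564087","StatusCode":418,"StatusName":"Paused"}'
code=$(printf '%s' "$status_json" | sed -n 's/.*"StatusCode":\([0-9]*\).*/\1/p')
echo "$code"   # 418
```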

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220329180757-564087 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220329180757-564087 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (2.51s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220329180757-564087 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220329180757-564087 --alsologtostderr -v=5: (2.506881357s)
--- PASS: TestPause/serial/DeletePaused (2.51s)

TestPause/serial/VerifyDeletedResources (0.64s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220329180757-564087
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220329180757-564087: exit status 1 (31.658816ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220329180757-564087

** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)

TestStartStop/group/old-k8s-version/serial/FirstStart (319.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220329180858-564087 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0329 18:09:01.060298  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 18:09:18.011270  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 18:09:25.128014  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:25.133280  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:25.143523  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:25.163798  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:25.204105  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:25.284435  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:25.444825  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220329180858-564087 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m19.729015236s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (319.73s)

TestStartStop/group/no-preload/serial/FirstStart (54.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220329180928-564087 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0329 18:09:30.084674  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
E0329 18:09:30.246950  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:35.367572  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:09:45.608187  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220329180928-564087 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (54.136465406s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.14s)

TestStartStop/group/embed-certs/serial/FirstStart (291.6s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220329181004-564087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0329 18:10:06.089277  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220329181004-564087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (4m51.599001482s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (291.60s)

TestStartStop/group/no-preload/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220329180928-564087 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [61d0cc7f-9ebc-4748-b38f-abdd2588273b] Pending
helpers_test.go:343: "busybox" [61d0cc7f-9ebc-4748-b38f-abdd2588273b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [61d0cc7f-9ebc-4748-b38f-abdd2588273b] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.011716951s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220329180928-564087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220329180928-564087 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220329180928-564087 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/no-preload/serial/Stop (10.87s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220329180928-564087 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220329180928-564087 --alsologtostderr -v=3: (10.869967023s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087: exit status 7 (101.66219ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220329180928-564087 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
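The `status error: exit status 7 (may be ok)` line above is expected right after a stop. As I read the minikube source, the `status` exit code is a bitmask (host not running = 1, kubelet not running = 2, apiserver not running = 4), so 7 means all three are down; treat those flag values as an assumption rather than documented API:

```shell
# Decode a minikube `status` exit code under the assumed bitmask
# (1 = host stopped, 2 = kubelet stopped, 4 = apiserver stopped).
decode_status_exit() {
  code=$1; parts=""
  if [ $((code & 1)) -ne 0 ]; then parts="$parts host-stopped"; fi
  if [ $((code & 2)) -ne 0 ]; then parts="$parts kubelet-stopped"; fi
  if [ $((code & 4)) -ne 0 ]; then parts="$parts apiserver-stopped"; fi
  echo "${parts# }"
}
decode_status_exit 7   # the fully-stopped "(may be ok)" case above
```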

TestStartStop/group/no-preload/serial/SecondStart (338.22s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220329180928-564087 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0329 18:10:47.049433  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220329180928-564087 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (5m37.801798942s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (338.22s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (289.09s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220329181100-564087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0329 18:11:17.157976  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
E0329 18:12:08.970150  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:14:18.010626  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220329181100-564087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (4m49.092158737s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (289.09s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220329180858-564087 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [b014fa01-76f0-4204-b6e6-a481153781f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [b014fa01-76f0-4204-b6e6-a481153781f4] Running
E0329 18:14:25.127862  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.0115275s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220329180858-564087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220329180858-564087 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220329180858-564087 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220329180858-564087 --alsologtostderr -v=3
E0329 18:14:30.084731  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220329180858-564087 --alsologtostderr -v=3: (10.865651152s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087: exit status 7 (101.642084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220329180858-564087 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (561.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220329180858-564087 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0329 18:14:52.810940  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220329180858-564087 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (9m21.471537201s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (561.90s)

TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220329181004-564087 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [26364219-4e4b-4766-92b2-516d6b7e8363] Pending
helpers_test.go:343: "busybox" [26364219-4e4b-4766-92b2-516d6b7e8363] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [26364219-4e4b-4766-92b2-516d6b7e8363] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.01127829s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220329181004-564087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220329181004-564087 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220329181004-564087 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/embed-certs/serial/Stop (10.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220329181004-564087 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220329181004-564087 --alsologtostderr -v=3: (10.847616085s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.85s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087: exit status 7 (96.662941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220329181004-564087 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (571.91s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220329181004-564087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220329181004-564087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (9m31.478104407s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (571.91s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.42s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220329181100-564087 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [81fdd287-9653-416f-8177-b5e009324df5] Pending
helpers_test.go:343: "busybox" [81fdd287-9653-416f-8177-b5e009324df5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [81fdd287-9653-416f-8177-b5e009324df5] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 7.014491463s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220329181100-564087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.42s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.59s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220329181100-564087 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220329181100-564087 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.82s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220329181100-564087 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220329181100-564087 --alsologtostderr -v=3: (10.816540843s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.82s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087: exit status 7 (95.797747ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220329181100-564087 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (322.41s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220329181100-564087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0329 18:16:17.157970  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220329181100-564087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (5m21.939452892s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (322.41s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-xq624" [b13f7fa9-2182-4332-8a48-d1e50e884afb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-xq624" [b13f7fa9-2182-4332-8a48-d1e50e884afb] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.014014087s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-xq624" [b13f7fa9-2182-4332-8a48-d1e50e884afb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006965446s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220329180928-564087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220329180928-564087 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (3.04s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220329180928-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087: exit status 2 (390.293922ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087: exit status 2 (389.347015ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220329180928-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220329180928-564087 -n no-preload-20220329180928-564087
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.04s)

TestStartStop/group/newest-cni/serial/FirstStart (40.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220329181641-564087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220329181641-564087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (40.411331353s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.41s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220329181641-564087 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (10.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220329181641-564087 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220329181641-564087 --alsologtostderr -v=3: (10.777773469s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.78s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087
E0329 18:17:33.131678  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329171943-564087/client.crt: no such file or directory
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087: exit status 7 (98.008599ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220329181641-564087 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (20.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220329181641-564087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220329181641-564087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (19.957886697s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.37s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220329181641-564087 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (3.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220329181641-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087: exit status 2 (393.516587ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087: exit status 2 (390.084495ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220329181641-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220329181641-564087 -n newest-cni-20220329181641-564087
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)

TestNetworkPlugins/group/auto/Start (41.99s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (41.993910262s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.99s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220329180853-564087 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220329180853-564087 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-qh8rl" [f8e3045f-f9b1-483d-904e-c2c16ca06fe3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-qh8rl" [f8e3045f-f9b1-483d-904e-c2c16ca06fe3] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005979692s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vs4v5" [214e9dae-33a7-4a61-aa8d-413b885ca453] Pending

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vs4v5" [214e9dae-33a7-4a61-aa8d-413b885ca453] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vs4v5" [214e9dae-33a7-4a61-aa8d-413b885ca453] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.012096971s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vs4v5" [214e9dae-33a7-4a61-aa8d-413b885ca453] Running
E0329 18:21:44.982220  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007530032s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220329181100-564087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220329181100-564087 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/default-k8s-different-port/serial/Pause (3s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220329181100-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087: exit status 2 (386.089597ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087: exit status 2 (384.625598ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220329181100-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220329181100-564087 -n default-k8s-different-port-20220329181100-564087
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.00s)

TestNetworkPlugins/group/false/Start (288.24s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p false-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (4m48.240106283s)
--- PASS: TestNetworkPlugins/group/false/Start (288.24s)

TestNetworkPlugins/group/cilium/Start (95.67s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker
E0329 18:23:42.756723  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:42.762041  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:42.772295  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:42.792563  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:42.832825  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:42.913169  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:43.073531  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:43.394226  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:44.035158  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:45.315637  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:47.876884  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
E0329 18:23:52.997558  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m35.667283093s)
--- PASS: TestNetworkPlugins/group/cilium/Start (95.67s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-r2czs" [f82115e4-b015-40d0-bbf2-0b445ab9c432] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0329 18:24:03.238626  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012020287s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-r2czs" [f82115e4-b015-40d0-bbf2-0b445ab9c432] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006507993s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220329180858-564087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220329180858-564087 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220329180858-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087: exit status 2 (388.181796ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087: exit status 2 (390.86124ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220329180858-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220329180858-564087 -n old-k8s-version-20220329180858-564087
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-jqp9j" [8a65fb3a-2af2-4c73-a2ae-c8772a0a17ab] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012089493s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-jqp9j" [8a65fb3a-2af2-4c73-a2ae-c8772a0a17ab] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006355503s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220329181004-564087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220329181004-564087 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220329181004-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087
E0329 18:24:59.389780  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087: exit status 2 (407.143161ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087: exit status 2 (422.807893ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220329181004-564087 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220329181004-564087 -n embed-certs-20220329181004-564087
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-vj8tg" [03aec675-195a-4dda-8516-8a162cb41f37] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015048908s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220329180854-564087 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

TestNetworkPlugins/group/cilium/NetCatPod (12.98s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220329180854-564087 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-kwjdq" [38067107-8bce-47e0-a239-5f9da4902cd7] Pending
helpers_test.go:343: "netcat-668db85669-kwjdq" [38067107-8bce-47e0-a239-5f9da4902cd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0329 18:25:23.059557  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-kwjdq" [38067107-8bce-47e0-a239-5f9da4902cd7] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.005864207s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.98s)

TestNetworkPlugins/group/cilium/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220329180854-564087 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.17s)

TestNetworkPlugins/group/cilium/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220329180854-564087 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.17s)

TestNetworkPlugins/group/cilium/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220329180854-564087 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (42.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0329 18:25:40.350365  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329180858-564087/client.crt: no such file or directory
E0329 18:25:41.060667  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329171213-564087/client.crt: no such file or directory
E0329 18:25:48.171801  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329180337-564087/client.crt: no such file or directory
E0329 18:25:50.099225  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.104996  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.115294  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.135914  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.176474  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.257516  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.417841  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.738241  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:50.744502  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329180928-564087/client.crt: no such file or directory
E0329 18:25:51.378630  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:52.658861  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:25:55.219766  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:26:00.340019  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:26:10.580669  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/default-k8s-different-port-20220329181100-564087/client.crt: no such file or directory
E0329 18:26:17.158360  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329174003-564087/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (42.775973412s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.78s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220329180853-564087 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220329180853-564087 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-7n9q2" [df91c5be-af6e-4301-9942-0939b4adbe37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-7n9q2" [df91c5be-af6e-4301-9942-0939b4adbe37] Running
E0329 18:26:26.600781  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329180853-564087/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007558961s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

TestNetworkPlugins/group/false/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220329180854-564087 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

TestNetworkPlugins/group/false/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220329180854-564087 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-bsg8h" [56764ca8-0d89-4ffe-955e-0b357241c069] Pending
helpers_test.go:343: "netcat-668db85669-bsg8h" [56764ca8-0d89-4ffe-955e-0b357241c069] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:343: "netcat-668db85669-bsg8h" [56764ca8-0d89-4ffe-955e-0b357241c069] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006831753s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.36s)

TestNetworkPlugins/group/kindnet/Start (59.91s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220329180854-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (59.908611513s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.91s)

TestNetworkPlugins/group/bridge/Start (293.26s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (4m53.258105191s)
--- PASS: TestNetworkPlugins/group/bridge/Start (293.26s)

TestNetworkPlugins/group/kubenet/Start (289.93s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220329180853-564087 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (4m49.932555476s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (289.93s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-spbmc" [e44f12eb-3917-42f0-a576-7c3206edcd2d] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015071619s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220329180854-564087 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220329180854-564087 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-tlstw" [d0e50ff0-1835-4d88-9539-38449afceea9] Pending
helpers_test.go:343: "netcat-668db85669-tlstw" [d0e50ff0-1835-4d88-9539-38449afceea9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-tlstw" [d0e50ff0-1835-4d88-9539-38449afceea9] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006560955s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220329180853-564087 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220329180853-564087 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-7h726" [cfd6589c-5faf-4533-ae75-577227cc943b] Pending
helpers_test.go:343: "netcat-668db85669-7h726" [cfd6589c-5faf-4533-ae75-577227cc943b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-668db85669-7h726" [cfd6589c-5faf-4533-ae75-577227cc943b] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00626411s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220329180853-564087 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220329180853-564087 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-t57st" [6cc97bd3-c73e-4e69-9405-cc9791ccb8ea] Pending
helpers_test.go:343: "netcat-668db85669-t57st" [6cc97bd3-c73e-4e69-9405-cc9791ccb8ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:343: "netcat-668db85669-t57st" [6cc97bd3-c73e-4e69-9405-cc9791ccb8ea] Running
E0329 18:38:02.801810  564087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13730-560726-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329180854-564087/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.005590037s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.23s)

Test skip (21/281)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.5/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.5/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.5/kubectl (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:547: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.49s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220329181100-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220329181100-564087
--- SKIP: TestStartStop/group/disable-driver-mounts (0.49s)

TestNetworkPlugins/group/flannel (0.36s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220329180853-564087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220329180853-564087
--- SKIP: TestNetworkPlugins/group/flannel (0.36s)