Test Report: Docker_Linux 15985

                    
49d57361cbdf0d306690482a173cc4589bc1e918:2023-03-07:28216

Failed tests (2/313)

Order  Failed test                             Duration
205    TestMultiNode/serial/DeployApp2Nodes    6.34s
206    TestMultiNode/serial/PingHostFrom2Pods  2.90s
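The DeployApp2Nodes failure below trips at multinode_test.go:496, which expects one pod IP per node in the output of `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` but finds only one. A minimal sketch of that count check, assuming the quoted, space-separated output format shown in the log (the function name is hypothetical, not from the test source):

```python
def count_pod_ips(jsonpath_output: str) -> int:
    """Count pod IPs in jsonpath output like "'10.244.0.3'".

    The test log shows the raw output wrapped in single quotes;
    multiple IPs are space-separated.
    """
    stripped = jsonpath_output.strip().strip("'")
    return len(stripped.split()) if stripped else 0

# The failing run returned a single IP where two were expected:
assert count_pod_ips("'10.244.0.3'") == 1
# A healthy two-node busybox deployment would yield two IPs, e.g.:
assert count_pod_ips("'10.244.0.3 10.244.1.2'") == 2
```

Only one of the two busybox replicas got an IP counted here, which is consistent with the DNS failures that follow: the pod on the second node never became reachable.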
TestMultiNode/serial/DeployApp2Nodes (6.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-242095 -- rollout status deployment/busybox: (2.148141177s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1, output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-jvgsd -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-jvgsd -- nslookup kubernetes.io: exit status 1 (179.429617ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-jvgsd could not resolve 'kubernetes.io': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-rfr2n -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-jvgsd -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-jvgsd -- nslookup kubernetes.default: exit status 1 (194.993019ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-jvgsd could not resolve 'kubernetes.default': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-rfr2n -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-jvgsd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-jvgsd -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (172.237589ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-jvgsd could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-rfr2n -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242095
helpers_test.go:235: (dbg) docker inspect multinode-242095:

-- stdout --
	[
	    {
	        "Id": "d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614",
	        "Created": "2023-03-07T18:16:01.32530949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 787176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-07T18:16:01.685272554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ecf2c9654f2209c81fb249115d75cf7afa5e279e652d4cd7020a24755fb1b573",
	        "ResolvConfPath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/hosts",
	        "LogPath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614-json.log",
	        "Name": "/multinode-242095",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-242095:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-242095",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146-init/diff:/var/lib/docker/overlay2/919a933f2f65520d4dce55a67e6fc895e1b57558817c17c68c3332371c6bf864/diff:/var/lib/docker/overlay2/d65d2f46f6aacad358deb1fbc32f4b3a6f2fd572153e557e20e2df4757968368/diff:/var/lib/docker/overlay2/1518c45a2e2dce1bbb8c9aa4cc363e93df3a98ef694726780450519e31bd238c/diff:/var/lib/docker/overlay2/af8137f485c43770b622b3c06682d147962e52a09024fe1127c4012bd2b16dd1/diff:/var/lib/docker/overlay2/0c39a9c32c3420d15952bdaed361fbf502b7b7ec06ae5006d34ac5aebdd52b2e/diff:/var/lib/docker/overlay2/4b4c7c8f39851d9c713bdf69c47cda85bf28a7abbccd1efbfbfd2094a59ecf74/diff:/var/lib/docker/overlay2/a226f271a7dce28a16bd03338f4305d4cc5942639ea048bddc52d90676d5dadb/diff:/var/lib/docker/overlay2/798bf3f5849c5b37e64db134f6d6f0a76c77c3bc41de7f27b100e37cec888b0a/diff:/var/lib/docker/overlay2/8a955ad3c07447aaef0bf72a4fdd9c80dee7dd7b664319328958e91aa47723a5/diff:/var/lib/docker/overlay2/287fdc
6bbb7638b228c8d48f3a27342f66ab418bb1a026e7f4042650bac659c3/diff:/var/lib/docker/overlay2/56da69234005db78c51a0283d4e9cf00d88eb2f09ad16065d3d63438cab72528/diff:/var/lib/docker/overlay2/8adf80e19b4d86d17c4aab98e63b03e64aadfc167208fbd7a138f9351850ba3e/diff:/var/lib/docker/overlay2/b5fb9d46cd71c46fa6b95af53d84645498fefb689b6b7a8271ac64ef8b14f873/diff:/var/lib/docker/overlay2/7b83e52a5eeed93b87fdfb42fedd5e20a65c16b364a3173a412139fe87666842/diff:/var/lib/docker/overlay2/038dd5daad1ba03ab8124e662dbde6e352fd60b0920f19aa4b5f23f2c5d42e86/diff:/var/lib/docker/overlay2/29f9b656ab67e0347a7932337b58ccdf3f0846944fb64bb3e8b92d5150ccf75a/diff:/var/lib/docker/overlay2/e70566b6845919f6e856944a62e104bb99474342dcf9c33d0aa70679016659b4/diff:/var/lib/docker/overlay2/100bebad8422b6c9015de0846e887bd4347808552610c6b8c149e2030e4c0a1d/diff:/var/lib/docker/overlay2/b06220b91f876ad14e77f8e058436c8bd48a61be6c4ab1640a1abcbae75f9168/diff:/var/lib/docker/overlay2/630d79d8012f6bc27cf10474af460d069e1f90e142404e900cd06c51a4f4b3d2/diff:/var/lib/d
ocker/overlay2/54b4a84e08cfd660941bb5bd24b4b08c366980a2f60d6b5e7387d3bb4b7a20ae/diff:/var/lib/docker/overlay2/fe377ebe6634536001708957e3f740e03a688f0ed64d61c3a8a800d6b36cb0d4/diff:/var/lib/docker/overlay2/8b95cb1cb3d13f3c9c52bce66d5e61d04de2702ac15553fb72587d9589ec4d57/diff:/var/lib/docker/overlay2/6994ab173d1db5859fcc37a2387f6fd0ca92af2299f5f9b179c3a6de26e89965/diff:/var/lib/docker/overlay2/337e602a34c14c5c38c5e10f0742ae43130b8b6cc3cc07a10763f130a5809b5e/diff:/var/lib/docker/overlay2/65895bc80ed0fde627f0fe5b2eb2ff6c54ca83950ff46dbdb471f0629138b7da/diff:/var/lib/docker/overlay2/6a465ef5ff9312a8b2abf1d0eb61d0fc524542eb7c2d3836e42ac6cf9842233b/diff:/var/lib/docker/overlay2/c51e98fe15f6aff45a9234653cbc07f7d8e592c01233419c2aeb78b30f89b20f/diff:/var/lib/docker/overlay2/d6e942ab4944c8ad54cf1b8d146bfe8b2ff2a324e047c5c41f2451a2abe244e8/diff:/var/lib/docker/overlay2/07c1a0226e0ae9bc5a8a0dce15c688680a6802044b66d6b0087a3c904611d32a/diff:/var/lib/docker/overlay2/3e1ff08623a31836a6a7b281fbd7a3263b0e5e208067c31c91b8219fee6
31657/diff:/var/lib/docker/overlay2/231c5ac90e2b3b243b1a99debfa6af60f7d054158434d870678ff2ed600ed2b0/diff:/var/lib/docker/overlay2/658d368b80a6e77c7f4230d0cb1ea8ac7029426c32eeabbfe8aa64c69d696068/diff:/var/lib/docker/overlay2/422b8e31c25887c4d52d9d069ad2d1dbf68925474f40f49ee1097f62df7ad9e5/diff:/var/lib/docker/overlay2/6ae593499dac8a42852bb4bd3d84df42e373dbc6b211eee190c3a8785413ccf4/diff:/var/lib/docker/overlay2/9e6c3b3c7f3cee8a4b0334d3409c9e39a202d21b9f673a0c0f9a8bab27f4ce61/diff:/var/lib/docker/overlay2/2a49fdc47125f029948d1de86932b652faf6358a5f4b0cf15ec05a421f7c3678/diff:/var/lib/docker/overlay2/5a19340fed828a972bff409c5893b3125502e8ceb550fe2d8991015605076aff/diff:/var/lib/docker/overlay2/b4a05bca1441de84af28208a839c75f4af1f24c217fdaaf06b76e82f01fc4d25/diff:/var/lib/docker/overlay2/3ccc3fc608334d7d73cd5b962f04073071dd5372a6603c7804ba2593976c76e4/diff:/var/lib/docker/overlay2/973d6264ada7511f823a30269486644fc007fd9ec97372d6379a99aa5f2ad215/diff:/var/lib/docker/overlay2/7c8ac5df9ef9fdd654a01e1b48006689cf1fe0
24d2b06df6e41afb0d0d1ec5d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-242095",
	                "Source": "/var/lib/docker/volumes/multinode-242095/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-242095",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-242095",
	                "name.minikube.sigs.k8s.io": "multinode-242095",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8ab9b460df607dd60f8f2d7fc2844b5153a34764e944187ba30eb87168d23cf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c8ab9b460df6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-242095": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1953c0fdb57",
	                        "multinode-242095"
	                    ],
	                    "NetworkID": "d64b017e2b06dcf471040ca17ec801bfd97cfecf0860c7ece05de26ea5806633",
	                    "EndpointID": "9931b741d35b07aba5ba0ceeba050940fa5e3ac65c8327d49ca9f042163c37a9",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-242095 -n multinode-242095
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 logs -n 25: (1.135188384s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-424778                                  | second-424778        | jenkins | v1.29.0 | 07 Mar 23 18:14 UTC | 07 Mar 23 18:15 UTC |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| delete  | -p second-424778                                  | second-424778        | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| delete  | -p first-421668                                   | first-421668         | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| start   | -p mount-start-1-794899                           | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-1-794899 ssh -- ls                    | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-811521 ssh -- ls                    | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-794899                           | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-811521 ssh -- ls                    | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| start   | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| ssh     | mount-start-2-811521 ssh -- ls                    | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| delete  | -p mount-start-1-794899                           | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| start   | -p multinode-242095                               | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:17 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- apply -f                   | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- rollout                    | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- get pods -o                | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- get pods -o                | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC |                     |
	|         | busybox-6b86dd6d48-jvgsd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | busybox-6b86dd6d48-rfr2n --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC |                     |
	|         | busybox-6b86dd6d48-jvgsd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | busybox-6b86dd6d48-rfr2n --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC |                     |
	|         | busybox-6b86dd6d48-jvgsd -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | busybox-6b86dd6d48-rfr2n -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 18:15:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:15:54.931777  786188 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:15:54.932212  786188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:15:54.932232  786188 out.go:309] Setting ErrFile to fd 2...
	I0307 18:15:54.932240  786188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:15:54.932496  786188 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	I0307 18:15:54.933613  786188 out.go:303] Setting JSON to false
	I0307 18:15:54.934936  786188 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7106,"bootTime":1678205849,"procs":744,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:15:54.935000  786188 start.go:135] virtualization: kvm guest
	I0307 18:15:54.937189  786188 out.go:177] * [multinode-242095] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:15:54.939220  786188 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:15:54.939058  786188 notify.go:220] Checking for updates...
	I0307 18:15:54.940809  786188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:15:54.942432  786188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:15:54.944014  786188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	I0307 18:15:54.945431  786188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:15:54.946842  786188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:15:54.948558  786188 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:15:55.017643  786188 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0307 18:15:55.017759  786188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:15:55.134130  786188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-07 18:15:55.12526107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:15:55.134265  786188 docker.go:294] overlay module found
	I0307 18:15:55.136437  786188 out.go:177] * Using the docker driver based on user configuration
	I0307 18:15:55.137805  786188 start.go:296] selected driver: docker
	I0307 18:15:55.137816  786188 start.go:857] validating driver "docker" against <nil>
	I0307 18:15:55.137831  786188 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:15:55.138561  786188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:15:55.253976  786188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-07 18:15:55.246025628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:15:55.254123  786188 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0307 18:15:55.254384  786188 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:15:55.256432  786188 out.go:177] * Using Docker driver with root privileges
	I0307 18:15:55.258010  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:15:55.258025  786188 cni.go:136] 0 nodes found, recommending kindnet
	I0307 18:15:55.258035  786188 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 18:15:55.258050  786188 start_flags.go:319] config:
	{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:15:55.259771  786188 out.go:177] * Starting control plane node multinode-242095 in cluster multinode-242095
	I0307 18:15:55.261202  786188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0307 18:15:55.262657  786188 out.go:177] * Pulling base image ...
	I0307 18:15:55.264006  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:15:55.264030  786188 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon
	I0307 18:15:55.264045  786188 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0307 18:15:55.264057  786188 cache.go:57] Caching tarball of preloaded images
	I0307 18:15:55.264178  786188 preload.go:174] Found /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 18:15:55.264191  786188 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 18:15:55.264547  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:15:55.264573  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json: {Name:mkca2eae4602c84e1e5460196b84850da3483521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:15:55.326841  786188 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon, skipping pull
	I0307 18:15:55.326866  786188 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 exists in daemon, skipping load
	I0307 18:15:55.326885  786188 cache.go:193] Successfully downloaded all kic artifacts
	I0307 18:15:55.326942  786188 start.go:364] acquiring machines lock for multinode-242095: {Name:mk8dbb7646a5affb9e9bdbf371579a97af9f6e48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:15:55.327072  786188 start.go:368] acquired machines lock for "multinode-242095" in 100.418µs
	I0307 18:15:55.327111  786188 start.go:93] Provisioning new machine with config: &{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 18:15:55.327206  786188 start.go:125] createHost starting for "" (driver="docker")
	I0307 18:15:55.329448  786188 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 18:15:55.329736  786188 start.go:159] libmachine.API.Create for "multinode-242095" (driver="docker")
	I0307 18:15:55.329773  786188 client.go:168] LocalClient.Create starting
	I0307 18:15:55.329849  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem
	I0307 18:15:55.329890  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:15:55.329910  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:15:55.329993  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem
	I0307 18:15:55.330021  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:15:55.330036  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:15:55.330410  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 18:15:55.393404  786188 cli_runner.go:211] docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 18:15:55.393472  786188 network_create.go:281] running [docker network inspect multinode-242095] to gather additional debugging logs...
	I0307 18:15:55.393491  786188 cli_runner.go:164] Run: docker network inspect multinode-242095
	W0307 18:15:55.455828  786188 cli_runner.go:211] docker network inspect multinode-242095 returned with exit code 1
	I0307 18:15:55.455875  786188 network_create.go:284] error running [docker network inspect multinode-242095]: docker network inspect multinode-242095: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-242095 not found
	I0307 18:15:55.455887  786188 network_create.go:286] output of [docker network inspect multinode-242095]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-242095 not found
	
	** /stderr **
	I0307 18:15:55.455936  786188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:15:55.516686  786188 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ff68d98ad1f6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:f1:e9:e0} reservation:<nil>}
	I0307 18:15:55.517213  786188 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014d74b0}
	I0307 18:15:55.517242  786188 network_create.go:123] attempt to create docker network multinode-242095 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0307 18:15:55.517291  786188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242095 multinode-242095
	I0307 18:15:55.610545  786188 network_create.go:107] docker network multinode-242095 192.168.58.0/24 created
	I0307 18:15:55.610577  786188 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-242095" container
	I0307 18:15:55.610632  786188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 18:15:55.673127  786188 cli_runner.go:164] Run: docker volume create multinode-242095 --label name.minikube.sigs.k8s.io=multinode-242095 --label created_by.minikube.sigs.k8s.io=true
	I0307 18:15:55.737977  786188 oci.go:103] Successfully created a docker volume multinode-242095
	I0307 18:15:55.738085  786188 cli_runner.go:164] Run: docker run --rm --name multinode-242095-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095 --entrypoint /usr/bin/test -v multinode-242095:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -d /var/lib
	I0307 18:15:56.315501  786188 oci.go:107] Successfully prepared a docker volume multinode-242095
	I0307 18:15:56.315544  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:15:56.315567  786188 kic.go:190] Starting extracting preloaded images to volume ...
	I0307 18:15:56.315636  786188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 18:16:01.146072  786188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.830376075s)
	I0307 18:16:01.146121  786188 kic.go:199] duration metric: took 4.830547 seconds to extract preloaded images to volume
	W0307 18:16:01.146295  786188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 18:16:01.146448  786188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 18:16:01.262141  786188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-242095 --name multinode-242095 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-242095 --network multinode-242095 --ip 192.168.58.2 --volume multinode-242095:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9
	I0307 18:16:01.692913  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Running}}
	I0307 18:16:01.761029  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:01.830535  786188 cli_runner.go:164] Run: docker exec multinode-242095 stat /var/lib/dpkg/alternatives/iptables
	I0307 18:16:01.949015  786188 oci.go:144] the created container "multinode-242095" has a running status.
	I0307 18:16:01.949052  786188 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa...
	I0307 18:16:02.153840  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0307 18:16:02.153892  786188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 18:16:02.270926  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:02.341390  786188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 18:16:02.341419  786188 kic_runner.go:114] Args: [docker exec --privileged multinode-242095 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 18:16:02.460882  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:02.525919  786188 machine.go:88] provisioning docker machine ...
	I0307 18:16:02.525957  786188 ubuntu.go:169] provisioning hostname "multinode-242095"
	I0307 18:16:02.526029  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:02.592509  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:02.592972  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:02.592990  786188 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-242095 && echo "multinode-242095" | sudo tee /etc/hostname
	I0307 18:16:02.712150  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-242095
	
	I0307 18:16:02.712232  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:02.775415  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:02.775870  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:02.775893  786188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-242095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-242095/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-242095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:16:02.882900  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:16:02.882928  786188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15985-636026/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-636026/.minikube}
	I0307 18:16:02.882949  786188 ubuntu.go:177] setting up certificates
	I0307 18:16:02.882959  786188 provision.go:83] configureAuth start
	I0307 18:16:02.883018  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095
	I0307 18:16:02.944427  786188 provision.go:138] copyHostCerts
	I0307 18:16:02.944470  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:02.944501  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem, removing ...
	I0307 18:16:02.944511  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:02.944582  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem (1123 bytes)
	I0307 18:16:02.944658  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:02.944680  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem, removing ...
	I0307 18:16:02.944687  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:02.944713  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem (1679 bytes)
	I0307 18:16:02.944774  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:02.944792  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem, removing ...
	I0307 18:16:02.944801  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:02.944829  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem (1082 bytes)
	I0307 18:16:02.944886  786188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem org=jenkins.multinode-242095 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-242095]
	I0307 18:16:03.173353  786188 provision.go:172] copyRemoteCerts
	I0307 18:16:03.173413  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:16:03.173453  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.236744  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:03.318491  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 18:16:03.318553  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 18:16:03.335541  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 18:16:03.335587  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 18:16:03.351784  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 18:16:03.351827  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 18:16:03.367856  786188 provision.go:86] duration metric: configureAuth took 484.880035ms
	I0307 18:16:03.367876  786188 ubuntu.go:193] setting minikube options for container-runtime
	I0307 18:16:03.368041  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:03.368096  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.431225  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:03.431689  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:03.431711  786188 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 18:16:03.547069  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0307 18:16:03.547100  786188 ubuntu.go:71] root file system type: overlay
	I0307 18:16:03.547249  786188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 18:16:03.547322  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.610630  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:03.611064  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:03.611124  786188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 18:16:03.727539  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 18:16:03.727609  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.790236  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:03.790664  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:03.790685  786188 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 18:16:04.418495  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-07 18:16:03.723139738 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0307 18:16:04.418534  786188 machine.go:91] provisioned docker machine in 1.892591812s
	I0307 18:16:04.418549  786188 client.go:171] LocalClient.Create took 9.088765764s
	I0307 18:16:04.418571  786188 start.go:167] duration metric: libmachine.API.Create for "multinode-242095" took 9.088835023s
	I0307 18:16:04.418584  786188 start.go:300] post-start starting for "multinode-242095" (driver="docker")
	I0307 18:16:04.418597  786188 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:16:04.418664  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:16:04.418714  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.482678  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:04.570818  786188 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:16:04.573563  786188 command_runner.go:130] > NAME="Ubuntu"
	I0307 18:16:04.573584  786188 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0307 18:16:04.573590  786188 command_runner.go:130] > ID=ubuntu
	I0307 18:16:04.573597  786188 command_runner.go:130] > ID_LIKE=debian
	I0307 18:16:04.573613  786188 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0307 18:16:04.573620  786188 command_runner.go:130] > VERSION_ID="20.04"
	I0307 18:16:04.573628  786188 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0307 18:16:04.573634  786188 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0307 18:16:04.573642  786188 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0307 18:16:04.573654  786188 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0307 18:16:04.573665  786188 command_runner.go:130] > VERSION_CODENAME=focal
	I0307 18:16:04.573672  786188 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0307 18:16:04.573738  786188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 18:16:04.573754  786188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 18:16:04.573762  786188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 18:16:04.573770  786188 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0307 18:16:04.573780  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/addons for local assets ...
	I0307 18:16:04.573831  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/files for local assets ...
	I0307 18:16:04.573897  786188 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> 6427432.pem in /etc/ssl/certs
	I0307 18:16:04.573906  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /etc/ssl/certs/6427432.pem
	I0307 18:16:04.573977  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 18:16:04.580100  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:16:04.596589  786188 start.go:303] post-start completed in 177.993098ms
	I0307 18:16:04.596958  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095
	I0307 18:16:04.660545  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:16:04.660791  786188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:16:04.660836  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.724855  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:04.807361  786188 command_runner.go:130] > 16%
	I0307 18:16:04.807427  786188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 18:16:04.811084  786188 command_runner.go:130] > 245G
	I0307 18:16:04.811118  786188 start.go:128] duration metric: createHost completed in 9.48390237s
	I0307 18:16:04.811127  786188 start.go:83] releasing machines lock for "multinode-242095", held for 9.484040996s
	I0307 18:16:04.811178  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095
	I0307 18:16:04.874778  786188 ssh_runner.go:195] Run: cat /version.json
	I0307 18:16:04.874829  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.874907  786188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 18:16:04.874998  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.943415  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:04.943867  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:05.058478  786188 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 18:16:05.059801  786188 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1677262057-15923", "minikube_version": "v1.29.0", "commit": "d5f8b7c14d0e3cd88db476786b15ed1c8f7b9a62"}
	I0307 18:16:05.059924  786188 ssh_runner.go:195] Run: systemctl --version
	I0307 18:16:05.063547  786188 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0307 18:16:05.063574  786188 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0307 18:16:05.063638  786188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 18:16:05.067045  786188 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0307 18:16:05.067062  786188 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0307 18:16:05.067068  786188 command_runner.go:130] > Device: 36h/54d	Inode: 2131168     Links: 1
	I0307 18:16:05.067078  786188 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:16:05.067087  786188 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:16:05.067096  786188 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:16:05.067106  786188 command_runner.go:130] > Change: 2023-03-07 18:01:36.367924495 +0000
	I0307 18:16:05.067111  786188 command_runner.go:130] >  Birth: -
	I0307 18:16:05.067281  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 18:16:05.086271  786188 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 18:16:05.086323  786188 ssh_runner.go:195] Run: which cri-dockerd
	I0307 18:16:05.088875  786188 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 18:16:05.089038  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 18:16:05.095410  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 18:16:05.107501  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:16:05.121355  786188 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0307 18:16:05.121405  786188 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0307 18:16:05.121420  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:16:05.121453  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:16:05.121557  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:16:05.133192  786188 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:16:05.133212  786188 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:16:05.133277  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 18:16:05.140361  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:16:05.147468  786188 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:16:05.147522  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:16:05.154740  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:16:05.161816  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:16:05.168791  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:16:05.176118  786188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:16:05.182721  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 18:16:05.189835  786188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:16:05.195321  786188 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 18:16:05.196318  786188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:16:05.203660  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:16:05.274487  786188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:16:05.357224  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:16:05.357276  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:16:05.357330  786188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 18:16:05.366182  786188 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0307 18:16:05.366278  786188 command_runner.go:130] > [Unit]
	I0307 18:16:05.366297  786188 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 18:16:05.366309  786188 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 18:16:05.366316  786188 command_runner.go:130] > BindsTo=containerd.service
	I0307 18:16:05.366326  786188 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0307 18:16:05.366338  786188 command_runner.go:130] > Wants=network-online.target
	I0307 18:16:05.366349  786188 command_runner.go:130] > Requires=docker.socket
	I0307 18:16:05.366357  786188 command_runner.go:130] > StartLimitBurst=3
	I0307 18:16:05.366367  786188 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 18:16:05.366375  786188 command_runner.go:130] > [Service]
	I0307 18:16:05.366383  786188 command_runner.go:130] > Type=notify
	I0307 18:16:05.366393  786188 command_runner.go:130] > Restart=on-failure
	I0307 18:16:05.366409  786188 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 18:16:05.366427  786188 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 18:16:05.366441  786188 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 18:16:05.366454  786188 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 18:16:05.366467  786188 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 18:16:05.366481  786188 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 18:16:05.366497  786188 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 18:16:05.366516  786188 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 18:16:05.366531  786188 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 18:16:05.366540  786188 command_runner.go:130] > ExecStart=
	I0307 18:16:05.366566  786188 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0307 18:16:05.366579  786188 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 18:16:05.366592  786188 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 18:16:05.366623  786188 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 18:16:05.366634  786188 command_runner.go:130] > LimitNOFILE=infinity
	I0307 18:16:05.366641  786188 command_runner.go:130] > LimitNPROC=infinity
	I0307 18:16:05.366650  786188 command_runner.go:130] > LimitCORE=infinity
	I0307 18:16:05.366660  786188 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 18:16:05.366671  786188 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 18:16:05.366680  786188 command_runner.go:130] > TasksMax=infinity
	I0307 18:16:05.366687  786188 command_runner.go:130] > TimeoutStartSec=0
	I0307 18:16:05.366711  786188 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 18:16:05.366723  786188 command_runner.go:130] > Delegate=yes
	I0307 18:16:05.366734  786188 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 18:16:05.366744  786188 command_runner.go:130] > KillMode=process
	I0307 18:16:05.366759  786188 command_runner.go:130] > [Install]
	I0307 18:16:05.366769  786188 command_runner.go:130] > WantedBy=multi-user.target
	I0307 18:16:05.367066  786188 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0307 18:16:05.367125  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:16:05.377113  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:16:05.388695  786188 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:16:05.388722  786188 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:16:05.389568  786188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 18:16:05.495258  786188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 18:16:05.573651  786188 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 18:16:05.573691  786188 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 18:16:05.602578  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:16:05.676496  786188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 18:16:05.880899  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:16:05.890011  786188 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0307 18:16:05.957675  786188 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 18:16:06.029343  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:16:06.104607  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:16:06.173711  786188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 18:16:06.184480  786188 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 18:16:06.184553  786188 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 18:16:06.187420  786188 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 18:16:06.187459  786188 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 18:16:06.187469  786188 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0307 18:16:06.187480  786188 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0307 18:16:06.187495  786188 command_runner.go:130] > Access: 2023-03-07 18:16:06.179386721 +0000
	I0307 18:16:06.187504  786188 command_runner.go:130] > Modify: 2023-03-07 18:16:06.179386721 +0000
	I0307 18:16:06.187510  786188 command_runner.go:130] > Change: 2023-03-07 18:16:06.179386721 +0000
	I0307 18:16:06.187514  786188 command_runner.go:130] >  Birth: -
	I0307 18:16:06.187532  786188 start.go:553] Will wait 60s for crictl version
	I0307 18:16:06.187576  786188 ssh_runner.go:195] Run: which crictl
	I0307 18:16:06.190013  786188 command_runner.go:130] > /usr/bin/crictl
	I0307 18:16:06.190159  786188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:16:06.266348  786188 command_runner.go:130] > Version:  0.1.0
	I0307 18:16:06.266372  786188 command_runner.go:130] > RuntimeName:  docker
	I0307 18:16:06.266379  786188 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0307 18:16:06.266388  786188 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 18:16:06.268109  786188 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0307 18:16:06.268178  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:16:06.290135  786188 command_runner.go:130] > 23.0.1
	I0307 18:16:06.290205  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:16:06.310464  786188 command_runner.go:130] > 23.0.1
	I0307 18:16:06.315381  786188 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0307 18:16:06.315483  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:16:06.381405  786188 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0307 18:16:06.384852  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:16:06.393931  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:16:06.393983  786188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 18:16:06.410274  786188 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 18:16:06.410296  786188 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 18:16:06.410305  786188 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 18:16:06.410315  786188 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 18:16:06.410323  786188 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 18:16:06.410329  786188 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 18:16:06.410340  786188 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 18:16:06.410348  786188 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:16:06.411468  786188 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 18:16:06.411490  786188 docker.go:560] Images already preloaded, skipping extraction
	I0307 18:16:06.411547  786188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 18:16:06.427506  786188 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 18:16:06.427531  786188 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 18:16:06.427548  786188 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 18:16:06.427555  786188 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 18:16:06.427560  786188 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 18:16:06.427564  786188 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 18:16:06.427569  786188 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 18:16:06.427577  786188 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:16:06.428726  786188 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 18:16:06.428744  786188 cache_images.go:84] Images are preloaded, skipping loading
	I0307 18:16:06.428801  786188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 18:16:06.451260  786188 command_runner.go:130] > cgroupfs
	I0307 18:16:06.451332  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:16:06.451346  786188 cni.go:136] 1 nodes found, recommending kindnet
	I0307 18:16:06.451360  786188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 18:16:06.451385  786188 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-242095 NodeName:multinode-242095 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 18:16:06.451564  786188 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-242095"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 18:16:06.451664  786188 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-242095 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 18:16:06.451718  786188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 18:16:06.458500  786188 command_runner.go:130] > kubeadm
	I0307 18:16:06.458519  786188 command_runner.go:130] > kubectl
	I0307 18:16:06.458525  786188 command_runner.go:130] > kubelet
	I0307 18:16:06.458547  786188 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:16:06.458594  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 18:16:06.465190  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0307 18:16:06.477749  786188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:16:06.490028  786188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0307 18:16:06.502011  786188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0307 18:16:06.504682  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:16:06.513322  786188 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095 for IP: 192.168.58.2
	I0307 18:16:06.513347  786188 certs.go:186] acquiring lock for shared ca certs: {Name:mk6aa9dfc4b93dc10fe6d5a07411d8b3adb46804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.513489  786188 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key
	I0307 18:16:06.513530  786188 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key
	I0307 18:16:06.513587  786188 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key
	I0307 18:16:06.513600  786188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt with IP's: []
	I0307 18:16:06.751218  786188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt ...
	I0307 18:16:06.751252  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt: {Name:mk3556412664174b1430b247b49895322a37a5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.751419  786188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key ...
	I0307 18:16:06.751431  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key: {Name:mkde36e5a541677c98da0cfe15583bfe6e293f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.751526  786188 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041
	I0307 18:16:06.751540  786188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0307 18:16:06.906547  786188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041 ...
	I0307 18:16:06.906577  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041: {Name:mk64e669a3624bbba51cf370217c5818c5e82f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.906717  786188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041 ...
	I0307 18:16:06.906727  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041: {Name:mkb0d02d3cff1f19e6e6f14f079c5837a5b6505a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.906782  786188 certs.go:333] copying /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt
	I0307 18:16:06.906848  786188 certs.go:337] copying /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key
	I0307 18:16:06.906907  786188 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key
	I0307 18:16:06.906920  786188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt with IP's: []
	I0307 18:16:06.959740  786188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt ...
	I0307 18:16:06.959765  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt: {Name:mk3368b479dd550d3cdec9cba98713d2e9e8e080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.959881  786188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key ...
	I0307 18:16:06.959895  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key: {Name:mkb1a5e254ccb5b4b9145f97db39e4f420a21824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.959961  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 18:16:06.959978  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 18:16:06.959987  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 18:16:06.959999  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 18:16:06.960012  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 18:16:06.960027  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 18:16:06.960040  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 18:16:06.960049  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 18:16:06.960107  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem (1338 bytes)
	W0307 18:16:06.960140  786188 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743_empty.pem, impossibly tiny 0 bytes
	I0307 18:16:06.960151  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 18:16:06.960178  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem (1082 bytes)
	I0307 18:16:06.960201  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:16:06.960224  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem (1679 bytes)
	I0307 18:16:06.960259  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:16:06.960283  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:06.960296  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem -> /usr/share/ca-certificates/642743.pem
	I0307 18:16:06.960307  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /usr/share/ca-certificates/6427432.pem
	I0307 18:16:06.960864  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0307 18:16:06.979312  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 18:16:06.995980  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 18:16:07.012899  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 18:16:07.029138  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:16:07.045508  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 18:16:07.061491  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:16:07.077436  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:16:07.093037  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:16:07.109204  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem --> /usr/share/ca-certificates/642743.pem (1338 bytes)
	I0307 18:16:07.125050  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /usr/share/ca-certificates/6427432.pem (1708 bytes)
	I0307 18:16:07.141153  786188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 18:16:07.152851  786188 ssh_runner.go:195] Run: openssl version
	I0307 18:16:07.157179  786188 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0307 18:16:07.157249  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6427432.pem && ln -fs /usr/share/ca-certificates/6427432.pem /etc/ssl/certs/6427432.pem"
	I0307 18:16:07.164041  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.166774  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.166844  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.166893  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.171158  786188 command_runner.go:130] > 3ec20f2e
	I0307 18:16:07.171350  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6427432.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 18:16:07.178077  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:16:07.184784  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.187383  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.187494  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.187537  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.191793  786188 command_runner.go:130] > b5213941
	I0307 18:16:07.191828  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 18:16:07.198179  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/642743.pem && ln -fs /usr/share/ca-certificates/642743.pem /etc/ssl/certs/642743.pem"
	I0307 18:16:07.204766  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.207382  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.207433  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.207483  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.211548  786188 command_runner.go:130] > 51391683
	I0307 18:16:07.211691  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/642743.pem /etc/ssl/certs/51391683.0"
	I0307 18:16:07.218110  786188 kubeadm.go:401] StartCluster: {Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:16:07.218215  786188 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 18:16:07.234150  786188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 18:16:07.240431  786188 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0307 18:16:07.240449  786188 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0307 18:16:07.240454  786188 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0307 18:16:07.240493  786188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:16:07.247416  786188 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0307 18:16:07.247475  786188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:16:07.253606  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0307 18:16:07.253635  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0307 18:16:07.253644  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0307 18:16:07.253655  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:16:07.253693  786188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:16:07.253730  786188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0307 18:16:07.297573  786188 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
	I0307 18:16:07.297605  786188 command_runner.go:130] > [init] Using Kubernetes version: v1.26.2
	I0307 18:16:07.297666  786188 kubeadm.go:322] [preflight] Running pre-flight checks
	I0307 18:16:07.297677  786188 command_runner.go:130] > [preflight] Running pre-flight checks
	I0307 18:16:07.331515  786188 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0307 18:16:07.331546  786188 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0307 18:16:07.331618  786188 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1030-gcp
	I0307 18:16:07.331628  786188 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0307 18:16:07.331663  786188 kubeadm.go:322] OS: Linux
	I0307 18:16:07.331669  786188 command_runner.go:130] > OS: Linux
	I0307 18:16:07.331708  786188 kubeadm.go:322] CGROUPS_CPU: enabled
	I0307 18:16:07.331714  786188 command_runner.go:130] > CGROUPS_CPU: enabled
	I0307 18:16:07.331752  786188 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0307 18:16:07.331762  786188 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0307 18:16:07.331834  786188 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0307 18:16:07.331841  786188 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0307 18:16:07.331878  786188 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0307 18:16:07.331884  786188 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0307 18:16:07.331927  786188 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0307 18:16:07.331933  786188 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0307 18:16:07.331992  786188 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0307 18:16:07.332022  786188 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0307 18:16:07.332085  786188 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0307 18:16:07.332094  786188 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0307 18:16:07.332167  786188 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0307 18:16:07.332188  786188 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0307 18:16:07.332234  786188 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0307 18:16:07.332243  786188 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0307 18:16:07.396137  786188 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:16:07.396168  786188 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:16:07.396300  786188 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:16:07.396328  786188 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:16:07.396420  786188 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 18:16:07.396431  786188 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 18:16:07.523937  786188 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:16:07.528351  786188 out.go:204]   - Generating certificates and keys ...
	I0307 18:16:07.523972  786188 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:16:07.528496  786188 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0307 18:16:07.528529  786188 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0307 18:16:07.528577  786188 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0307 18:16:07.528584  786188 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0307 18:16:07.717859  786188 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 18:16:07.717887  786188 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 18:16:07.849570  786188 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0307 18:16:07.849601  786188 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0307 18:16:07.969943  786188 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0307 18:16:07.969972  786188 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0307 18:16:08.173499  786188 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0307 18:16:08.173527  786188 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0307 18:16:08.313210  786188 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0307 18:16:08.313238  786188 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0307 18:16:08.313407  786188 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.313436  786188 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.446657  786188 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0307 18:16:08.446689  786188 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0307 18:16:08.446821  786188 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.446852  786188 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.575809  786188 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 18:16:08.575834  786188 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 18:16:08.672952  786188 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 18:16:08.672981  786188 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 18:16:09.118621  786188 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0307 18:16:09.118692  786188 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0307 18:16:09.118763  786188 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:16:09.118784  786188 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:16:09.348007  786188 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:16:09.348035  786188 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:16:09.431312  786188 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:16:09.431346  786188 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:16:09.496928  786188 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:16:09.496955  786188 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:16:09.728080  786188 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:16:09.728113  786188 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:16:09.739432  786188 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:16:09.739481  786188 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:16:09.741537  786188 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:16:09.741564  786188 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:16:09.741635  786188 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0307 18:16:09.741651  786188 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 18:16:09.823426  786188 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:16:09.823487  786188 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:16:09.826138  786188 out.go:204]   - Booting up control plane ...
	I0307 18:16:09.826261  786188 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:16:09.826317  786188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:16:09.826438  786188 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:16:09.826471  786188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:16:09.827407  786188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:16:09.827430  786188 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:16:09.828192  786188 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:16:09.828211  786188 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:16:09.829986  786188 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:16:09.830005  786188 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:16:18.332163  786188 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502131 seconds
	I0307 18:16:18.332189  786188 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502131 seconds
	I0307 18:16:18.332349  786188 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 18:16:18.332374  786188 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 18:16:18.343799  786188 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 18:16:18.343822  786188 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 18:16:18.860223  786188 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 18:16:18.860256  786188 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0307 18:16:18.860461  786188 kubeadm.go:322] [mark-control-plane] Marking the node multinode-242095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 18:16:18.860473  786188 command_runner.go:130] > [mark-control-plane] Marking the node multinode-242095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 18:16:19.369037  786188 kubeadm.go:322] [bootstrap-token] Using token: r7749e.dyce20vphzwpiu0j
	I0307 18:16:19.370708  786188 out.go:204]   - Configuring RBAC rules ...
	I0307 18:16:19.369134  786188 command_runner.go:130] > [bootstrap-token] Using token: r7749e.dyce20vphzwpiu0j
	I0307 18:16:19.370854  786188 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 18:16:19.370873  786188 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 18:16:19.373850  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 18:16:19.373870  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 18:16:19.380490  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 18:16:19.380511  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 18:16:19.382957  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 18:16:19.382975  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 18:16:19.385405  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 18:16:19.385425  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 18:16:19.387599  786188 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 18:16:19.387616  786188 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 18:16:19.396513  786188 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 18:16:19.396535  786188 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 18:16:19.611118  786188 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0307 18:16:19.611157  786188 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0307 18:16:19.795904  786188 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0307 18:16:19.795932  786188 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0307 18:16:19.797311  786188 kubeadm.go:322] 
	I0307 18:16:19.797419  786188 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0307 18:16:19.797445  786188 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0307 18:16:19.797452  786188 kubeadm.go:322] 
	I0307 18:16:19.797531  786188 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0307 18:16:19.797549  786188 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0307 18:16:19.797560  786188 kubeadm.go:322] 
	I0307 18:16:19.797592  786188 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0307 18:16:19.797599  786188 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0307 18:16:19.797655  786188 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 18:16:19.797668  786188 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 18:16:19.797733  786188 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 18:16:19.797741  786188 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 18:16:19.797744  786188 kubeadm.go:322] 
	I0307 18:16:19.797808  786188 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0307 18:16:19.797819  786188 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0307 18:16:19.797828  786188 kubeadm.go:322] 
	I0307 18:16:19.797910  786188 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 18:16:19.797928  786188 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 18:16:19.797938  786188 kubeadm.go:322] 
	I0307 18:16:19.798003  786188 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0307 18:16:19.798021  786188 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0307 18:16:19.798122  786188 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 18:16:19.798134  786188 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 18:16:19.798220  786188 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 18:16:19.798234  786188 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 18:16:19.798239  786188 kubeadm.go:322] 
	I0307 18:16:19.798355  786188 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 18:16:19.798379  786188 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0307 18:16:19.798487  786188 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0307 18:16:19.798500  786188 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0307 18:16:19.798505  786188 kubeadm.go:322] 
	I0307 18:16:19.798602  786188 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.798621  786188 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.798793  786188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb \
	I0307 18:16:19.798804  786188 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb \
	I0307 18:16:19.798822  786188 kubeadm.go:322] 	--control-plane 
	I0307 18:16:19.798843  786188 command_runner.go:130] > 	--control-plane 
	I0307 18:16:19.798860  786188 kubeadm.go:322] 
	I0307 18:16:19.798947  786188 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0307 18:16:19.798962  786188 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0307 18:16:19.798967  786188 kubeadm.go:322] 
	I0307 18:16:19.799074  786188 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.799084  786188 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.799223  786188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb 
	I0307 18:16:19.799235  786188 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb 
	I0307 18:16:19.800966  786188 kubeadm.go:322] W0307 18:16:07.290185    1399 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 18:16:19.800980  786188 command_runner.go:130] ! W0307 18:16:07.290185    1399 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 18:16:19.801276  786188 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0307 18:16:19.801308  786188 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0307 18:16:19.801478  786188 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:16:19.801492  786188 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:16:19.801517  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:16:19.801539  786188 cni.go:136] 1 nodes found, recommending kindnet
	I0307 18:16:19.803644  786188 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 18:16:19.805244  786188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 18:16:19.809249  786188 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 18:16:19.809269  786188 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0307 18:16:19.809279  786188 command_runner.go:130] > Device: 36h/54d	Inode: 2129263     Links: 1
	I0307 18:16:19.809288  786188 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:16:19.809296  786188 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:16:19.809303  786188 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:16:19.809310  786188 command_runner.go:130] > Change: 2023-03-07 18:01:35.631850484 +0000
	I0307 18:16:19.809328  786188 command_runner.go:130] >  Birth: -
	I0307 18:16:19.809695  786188 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 18:16:19.809714  786188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 18:16:19.826959  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 18:16:20.681840  786188 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0307 18:16:20.685894  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0307 18:16:20.696585  786188 command_runner.go:130] > serviceaccount/kindnet created
	I0307 18:16:20.704387  786188 command_runner.go:130] > daemonset.apps/kindnet created
	I0307 18:16:20.708153  786188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 18:16:20.708224  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:20.708268  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1 minikube.k8s.io/name=multinode-242095 minikube.k8s.io/updated_at=2023_03_07T18_16_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:20.801587  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0307 18:16:20.806057  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:20.811657  786188 command_runner.go:130] > node/multinode-242095 labeled
	I0307 18:16:20.814348  786188 command_runner.go:130] > -16
	I0307 18:16:20.814384  786188 ops.go:34] apiserver oom_adj: -16
	I0307 18:16:20.866875  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:21.370044  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:21.429400  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:21.870358  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:21.932610  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:22.370244  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:22.428897  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:22.869838  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:22.929182  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:23.370079  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:23.430666  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:23.870324  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:23.931572  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:24.369475  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:24.430078  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:24.869626  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:24.932327  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:25.370177  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:25.432463  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:25.870496  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:25.930432  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:26.370423  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:26.430173  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:26.869462  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:26.931621  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:27.370290  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:27.431618  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:27.870269  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:27.929680  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:28.369625  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:28.431090  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:28.869658  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:28.932242  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:29.369779  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:29.430964  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:29.869529  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:29.927813  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:30.370224  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:30.431125  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:30.869835  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:30.932913  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:31.369514  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:31.434726  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:31.870349  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:31.933361  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:32.369992  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:32.428660  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:32.869680  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:33.004581  786188 command_runner.go:130] > NAME      SECRETS   AGE
	I0307 18:16:33.004606  786188 command_runner.go:130] > default   0         1s
	I0307 18:16:33.007272  786188 kubeadm.go:1073] duration metric: took 12.299093044s to wait for elevateKubeSystemPrivileges.
	I0307 18:16:33.007306  786188 kubeadm.go:403] StartCluster complete in 25.789200932s
	I0307 18:16:33.007327  786188 settings.go:142] acquiring lock: {Name:mk20aadaac3bdeaefa078eca20fd3af7c7410f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:33.007417  786188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:33.008372  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/kubeconfig: {Name:mk9b5454025117fb515bc2f65b05f28b0fa10239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:33.008680  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 18:16:33.008762  786188 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0307 18:16:33.008951  786188 addons.go:66] Setting storage-provisioner=true in profile "multinode-242095"
	I0307 18:16:33.008955  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:33.008980  786188 addons.go:228] Setting addon storage-provisioner=true in "multinode-242095"
	I0307 18:16:33.009008  786188 addons.go:66] Setting default-storageclass=true in profile "multinode-242095"
	I0307 18:16:33.009040  786188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-242095"
	I0307 18:16:33.009048  786188 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:16:33.009051  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:33.009416  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:33.009373  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:16:33.009591  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:33.010656  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:16:33.010675  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.010688  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.010699  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.010933  786188 cert_rotation.go:137] Starting client certificate rotation controller
	I0307 18:16:33.020139  786188 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 18:16:33.020164  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.020175  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.020185  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.020195  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.020206  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.020219  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:16:33.020229  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.020242  786188 round_trippers.go:580]     Audit-Id: 18dae1a0-98d0-4016-8534-39090f93c347
	I0307 18:16:33.020275  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"314","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.020828  786188 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"314","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.020889  786188 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:16:33.020901  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.020913  786188 round_trippers.go:473]     Content-Type: application/json
	I0307 18:16:33.020923  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.020938  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.027536  786188 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 18:16:33.027560  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.027569  786188 round_trippers.go:580]     Audit-Id: 5ed54bf9-85de-475e-8220-d39207ace3fb
	I0307 18:16:33.027577  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.027585  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.027593  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.027602  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.027609  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:16:33.027617  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.027648  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"347","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.093612  786188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:16:33.092444  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:33.095725  786188 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:16:33.095748  786188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 18:16:33.095805  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:33.095928  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:16:33.096437  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0307 18:16:33.096455  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.096467  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.096477  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.099415  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:33.099434  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.099469  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.099479  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.099488  786188 round_trippers.go:580]     Content-Length: 109
	I0307 18:16:33.099495  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.099503  786188 round_trippers.go:580]     Audit-Id: 1710ddc7-6dd6-4d3f-8d35-374c5c4c9459
	I0307 18:16:33.099512  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.099519  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.099543  786188 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"356"},"items":[]}
	I0307 18:16:33.099908  786188 addons.go:228] Setting addon default-storageclass=true in "multinode-242095"
	I0307 18:16:33.099950  786188 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:16:33.100356  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:33.130155  786188 command_runner.go:130] > apiVersion: v1
	I0307 18:16:33.130178  786188 command_runner.go:130] > data:
	I0307 18:16:33.130185  786188 command_runner.go:130] >   Corefile: |
	I0307 18:16:33.130191  786188 command_runner.go:130] >     .:53 {
	I0307 18:16:33.130196  786188 command_runner.go:130] >         errors
	I0307 18:16:33.130210  786188 command_runner.go:130] >         health {
	I0307 18:16:33.130217  786188 command_runner.go:130] >            lameduck 5s
	I0307 18:16:33.130223  786188 command_runner.go:130] >         }
	I0307 18:16:33.130229  786188 command_runner.go:130] >         ready
	I0307 18:16:33.130239  786188 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0307 18:16:33.130249  786188 command_runner.go:130] >            pods insecure
	I0307 18:16:33.130257  786188 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0307 18:16:33.130268  786188 command_runner.go:130] >            ttl 30
	I0307 18:16:33.130280  786188 command_runner.go:130] >         }
	I0307 18:16:33.130289  786188 command_runner.go:130] >         prometheus :9153
	I0307 18:16:33.130297  786188 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0307 18:16:33.130308  786188 command_runner.go:130] >            max_concurrent 1000
	I0307 18:16:33.130317  786188 command_runner.go:130] >         }
	I0307 18:16:33.130323  786188 command_runner.go:130] >         cache 30
	I0307 18:16:33.130337  786188 command_runner.go:130] >         loop
	I0307 18:16:33.130343  786188 command_runner.go:130] >         reload
	I0307 18:16:33.130353  786188 command_runner.go:130] >         loadbalance
	I0307 18:16:33.130358  786188 command_runner.go:130] >     }
	I0307 18:16:33.130368  786188 command_runner.go:130] > kind: ConfigMap
	I0307 18:16:33.130374  786188 command_runner.go:130] > metadata:
	I0307 18:16:33.130387  786188 command_runner.go:130] >   creationTimestamp: "2023-03-07T18:16:19Z"
	I0307 18:16:33.130393  786188 command_runner.go:130] >   name: coredns
	I0307 18:16:33.130400  786188 command_runner.go:130] >   namespace: kube-system
	I0307 18:16:33.130413  786188 command_runner.go:130] >   resourceVersion: "227"
	I0307 18:16:33.130421  786188 command_runner.go:130] >   uid: 003916ba-54b9-48a3-a139-d67b66c9e19a
	I0307 18:16:33.132903  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
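The `ssh_runner` command above is the step that later produces `configmap/coredns replaced` and the "host record injected into CoreDNS's ConfigMap" line: it pipes the live `coredns` ConfigMap through `sed` to insert a `hosts` plugin block (resolving `host.minikube.internal` to the host gateway `192.168.58.1`) ahead of the `forward` stanza, and a `log` directive ahead of `errors`, then `kubectl replace`s the result. A minimal sketch of just the `sed` transformation, run against a trimmed sample Corefile instead of the real ConfigMap (GNU sed assumed, as on the minikube node image; the cluster-specific `kubectl` plumbing is omitted):

```shell
# Sample input mimicking the relevant Corefile lines (8-space indent, as in
# the ConfigMap dumped above). This is illustrative data, not the full file.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'

# Same two sed expressions as the logged command:
#  1. before the "forward . /etc/resolv.conf" line, insert a hosts block
#     mapping 192.168.58.1 -> host.minikube.internal, with fallthrough so
#     all other names continue to the remaining plugins;
#  2. before the "errors" line, insert the "log" directive.
printf '%s\n' "$corefile" |
  sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' \
      -e '/^        errors *$/i \        log'
```

In the real command the input comes from `kubectl ... get configmap coredns -o yaml` and the output is fed to `kubectl ... replace -f -`, so CoreDNS picks up the new block via its `reload` plugin.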
	I0307 18:16:33.173017  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:33.179700  786188 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 18:16:33.179723  786188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 18:16:33.179767  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:33.256040  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:33.309654  786188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:16:33.414125  786188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 18:16:33.528413  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:16:33.528435  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.528446  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.528454  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.531099  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:33.531126  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.531136  786188 round_trippers.go:580]     Audit-Id: be9eea0f-beb3-4372-bb4f-6ddd8e760d4a
	I0307 18:16:33.531144  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.531153  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.531162  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.531171  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.531183  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:16:33.531192  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.531216  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"357","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.531328  786188 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-242095" context rescaled to 1 replicas
	I0307 18:16:33.531362  786188 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 18:16:33.533721  786188 out.go:177] * Verifying Kubernetes components...
	I0307 18:16:33.535180  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:16:33.598827  786188 command_runner.go:130] > configmap/coredns replaced
	I0307 18:16:33.604492  786188 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0307 18:16:33.902262  786188 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0307 18:16:33.908095  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0307 18:16:33.918986  786188 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0307 18:16:33.924506  786188 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0307 18:16:33.929633  786188 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0307 18:16:33.937653  786188 command_runner.go:130] > pod/storage-provisioner created
	I0307 18:16:34.013590  786188 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0307 18:16:34.020473  786188 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0307 18:16:34.019024  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:34.022000  786188 addons.go:499] enable addons completed in 1.013235364s: enabled=[storage-provisioner default-storageclass]
	I0307 18:16:34.022312  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:16:34.022658  786188 node_ready.go:35] waiting up to 6m0s for node "multinode-242095" to be "Ready" ...
	I0307 18:16:34.022744  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:34.022754  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.022766  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.022780  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.024816  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:34.024838  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.024853  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.024861  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.024873  786188 round_trippers.go:580]     Audit-Id: 95988b2b-cc48-4f86-a80c-56c62fcb222d
	I0307 18:16:34.024881  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.024892  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.024901  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.025017  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:34.025791  786188 node_ready.go:49] node "multinode-242095" has status "Ready":"True"
	I0307 18:16:34.025812  786188 node_ready.go:38] duration metric: took 3.135062ms waiting for node "multinode-242095" to be "Ready" ...
	I0307 18:16:34.025824  786188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:16:34.025898  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:34.025908  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.025920  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.025933  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.032139  786188 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 18:16:34.032159  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.032170  786188 round_trippers.go:580]     Audit-Id: 127ad593-3e87-420f-9e8c-2f29b883dc26
	I0307 18:16:34.032179  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.032188  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.032194  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.032202  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.032210  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.032688  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"371"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"315","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54043 chars]
	I0307 18:16:34.036057  786188 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:34.036117  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:34.036125  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.036132  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.036141  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.037717  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:34.037732  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.037739  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.037746  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.037753  786188 round_trippers.go:580]     Audit-Id: 96f11db6-fe5f-45fb-83cd-a27fc7dfd3c0
	I0307 18:16:34.037761  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.037769  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.037782  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.037877  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"315","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4942 chars]
	I0307 18:16:34.539046  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:34.539071  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.539083  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.539093  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.541551  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:34.541570  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.541580  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.541589  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.541598  786188 round_trippers.go:580]     Audit-Id: 6c2a6e23-98b8-4b4b-a5c3-b1c6d19f0546
	I0307 18:16:34.541607  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.541617  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.541629  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.541755  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:34.542405  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:34.542424  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.542435  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.542445  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.592850  786188 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0307 18:16:34.592874  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.592885  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.592894  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.592908  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.592919  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.592926  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.592941  786188 round_trippers.go:580]     Audit-Id: 930dcf65-da60-4cf4-871f-a2ef96d7130c
	I0307 18:16:34.593061  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:35.038941  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:35.038970  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.038983  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.038995  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.092510  786188 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0307 18:16:35.092619  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.092638  786188 round_trippers.go:580]     Audit-Id: 75656a56-97ec-41bc-994b-eabeeca6ae54
	I0307 18:16:35.092647  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.092660  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.092680  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.092690  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.092701  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.092899  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:35.093506  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:35.093524  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.093535  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.093543  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.095797  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:35.095818  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.095828  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.095837  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.095847  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.095859  786188 round_trippers.go:580]     Audit-Id: e8ef9595-626c-4b84-a62e-b336fca45f15
	I0307 18:16:35.095871  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.095880  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.096236  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:35.538805  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:35.538829  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.538842  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.538851  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.540906  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:35.540963  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.540983  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.540996  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.541008  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.541032  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.541048  786188 round_trippers.go:580]     Audit-Id: 65fbf7ef-d842-45d1-8ada-bf75b6b08a8e
	I0307 18:16:35.541060  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.541174  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:35.541688  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:35.541736  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.541755  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.541772  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.543245  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:35.543290  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.543301  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.543316  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.543322  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.543331  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.543337  786188 round_trippers.go:580]     Audit-Id: 6ff30fe4-67ec-479b-abbc-c1a5100e8bd6
	I0307 18:16:35.543344  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.543426  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:36.039017  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:36.039036  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.039044  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.039050  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.041413  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:36.041439  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.041451  786188 round_trippers.go:580]     Audit-Id: f316d885-530a-4fd5-9378-d6e39f7715c2
	I0307 18:16:36.041463  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.041471  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.041480  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.041485  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.041493  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.041594  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:36.042063  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:36.042076  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.042083  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.042089  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.043773  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:36.043797  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.043808  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.043815  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.043823  786188 round_trippers.go:580]     Audit-Id: 87c297f8-2b38-4aa4-8e80-f9ff3455c0a7
	I0307 18:16:36.043829  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.043839  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.043852  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.043945  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:36.044247  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:36.538556  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:36.538579  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.538588  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.538595  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.540695  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:36.540724  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.540737  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.540747  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.540769  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.540782  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.540793  786188 round_trippers.go:580]     Audit-Id: ad51122b-95e6-495c-a09c-8d5cce9b45c1
	I0307 18:16:36.540802  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.540949  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:36.541459  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:36.541473  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.541481  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.541494  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.543218  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:36.543237  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.543244  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.543250  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.543256  786188 round_trippers.go:580]     Audit-Id: bd9810a4-0997-490e-9136-2b0fb74693b2
	I0307 18:16:36.543261  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.543266  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.543272  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.543382  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:37.039099  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:37.039120  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.039132  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.039141  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.041110  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.041131  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.041141  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.041150  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.041158  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.041167  786188 round_trippers.go:580]     Audit-Id: 43aef3df-a06c-4af7-b96c-1b8913715a11
	I0307 18:16:37.041177  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.041183  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.041320  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:37.041883  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:37.041897  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.041905  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.041911  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.043476  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.043494  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.043502  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.043507  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.043512  786188 round_trippers.go:580]     Audit-Id: 371242cf-4c94-4474-81c3-4c1e580aac8d
	I0307 18:16:37.043517  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.043522  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.043528  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.043668  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:37.539318  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:37.539339  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.539347  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.539353  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.541278  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.541304  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.541315  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.541323  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.541330  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.541339  786188 round_trippers.go:580]     Audit-Id: 1ebb83bb-ca94-4695-bb9a-0fc47eb523f5
	I0307 18:16:37.541348  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.541359  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.541471  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:37.541930  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:37.541945  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.541955  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.541964  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.543553  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.543577  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.543587  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.543596  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.543606  786188 round_trippers.go:580]     Audit-Id: 0f5de3e2-00f5-4285-afcd-791bf9969772
	I0307 18:16:37.543615  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.543629  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.543642  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.543721  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:38.039380  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:38.039399  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.039407  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.039414  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.041580  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:38.041600  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.041608  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.041618  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.041627  786188 round_trippers.go:580]     Audit-Id: 21b9ff99-dd20-4696-81bf-da2a37d2b427
	I0307 18:16:38.041635  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.041642  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.041651  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.041794  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:38.042235  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:38.042247  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.042254  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.042260  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.043906  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:38.043928  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.043938  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.043947  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.043953  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.043961  786188 round_trippers.go:580]     Audit-Id: 56ce60d0-3eeb-4764-ab92-416ca1dcdc4d
	I0307 18:16:38.043974  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.043984  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.044086  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:38.044391  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:38.538648  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:38.538679  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.538687  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.538694  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.540901  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:38.540921  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.540928  786188 round_trippers.go:580]     Audit-Id: 0e1e7500-e531-40b1-9878-ea3ed632cfad
	I0307 18:16:38.540934  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.540939  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.540947  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.540952  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.540958  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.541130  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:38.541616  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:38.541629  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.541637  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.541642  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.543494  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:38.543510  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.543517  786188 round_trippers.go:580]     Audit-Id: 86a81d6a-bd41-4ec0-8b9e-1010c9bd0054
	I0307 18:16:38.543522  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.543527  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.543533  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.543538  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.543543  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.543691  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:39.039396  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:39.039415  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.039424  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.039430  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.041533  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:39.041554  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.041564  786188 round_trippers.go:580]     Audit-Id: d27a8536-1bb4-415d-811e-e800bfd341cc
	I0307 18:16:39.041571  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.041579  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.041588  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.041597  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.041608  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.041720  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:39.042160  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:39.042175  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.042185  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.042194  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.043832  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:39.043853  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.043863  786188 round_trippers.go:580]     Audit-Id: 46c2e535-925f-4a18-b132-a60ef44d418a
	I0307 18:16:39.043874  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.043887  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.043896  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.043908  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.043920  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.043998  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:39.539416  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:39.539436  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.539457  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.539463  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.541530  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:39.541550  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.541558  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.541563  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.541568  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.541574  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.541579  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.541584  786188 round_trippers.go:580]     Audit-Id: 3fbc3dea-be1d-43c9-a388-a91c3deee502
	I0307 18:16:39.541691  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:39.542141  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:39.542154  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.542161  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.542167  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.543812  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:39.543837  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.543848  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.543872  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.543883  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.543901  786188 round_trippers.go:580]     Audit-Id: c0f988dd-97c0-42e1-bf1b-5f1cacdd29c5
	I0307 18:16:39.543910  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.543922  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.544058  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:40.038850  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:40.038877  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.038890  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.038904  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.041547  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:40.041574  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.041585  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.041594  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.041602  786188 round_trippers.go:580]     Audit-Id: 4fe7b7f9-ee66-4c02-9404-98c1deada2fb
	I0307 18:16:40.041648  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.041665  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.041677  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.041832  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:40.042421  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:40.042437  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.042449  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.042459  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.044460  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:40.044480  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.044489  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.044498  786188 round_trippers.go:580]     Audit-Id: aadfb225-8e4b-40b0-97ef-d8643516152e
	I0307 18:16:40.044511  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.044523  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.044535  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.044544  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.044640  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:40.044986  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:40.538862  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:40.538884  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.538892  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.538898  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.541026  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:40.541049  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.541060  786188 round_trippers.go:580]     Audit-Id: b63003f7-b641-43cb-a2bc-c412ced88bfe
	I0307 18:16:40.541070  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.541079  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.541089  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.541095  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.541103  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.541233  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:40.541666  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:40.541677  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.541686  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.541692  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.543485  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:40.543507  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.543518  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.543524  786188 round_trippers.go:580]     Audit-Id: 005c40f7-ef5f-45fe-967d-830436f2e2d3
	I0307 18:16:40.543533  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.543546  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.543563  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.543575  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.543666  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:41.039191  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:41.039210  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.039218  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.039224  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.041773  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:41.041799  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.041811  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.041821  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.041830  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.041839  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.041847  786188 round_trippers.go:580]     Audit-Id: 1b9ae6b0-86a8-47ba-9cfa-c91468a20ab0
	I0307 18:16:41.041859  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.041974  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:41.042551  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:41.042567  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.042577  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.042587  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.044251  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:41.044272  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.044283  786188 round_trippers.go:580]     Audit-Id: 57ffdca1-4313-4841-aa1f-2c8a948d2d7a
	I0307 18:16:41.044293  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.044302  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.044313  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.044324  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.044337  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.044440  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:41.539116  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:41.539141  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.539153  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.539163  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.541492  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:41.541515  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.541526  786188 round_trippers.go:580]     Audit-Id: 34e58c83-0a75-4c9d-bbe3-6941e0ac5926
	I0307 18:16:41.541533  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.541538  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.541544  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.541549  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.541566  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.541687  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:41.542199  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:41.542213  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.542220  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.542226  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.544216  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:41.544242  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.544259  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.544269  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.544278  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.544295  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.544306  786188 round_trippers.go:580]     Audit-Id: 1406e5b4-7587-489a-9564-f4c01cd42c72
	I0307 18:16:41.544315  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.544434  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:42.039046  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:42.039069  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.039077  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.039083  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.041918  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:42.041943  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.041952  786188 round_trippers.go:580]     Audit-Id: 09cae67b-5d59-498f-9f3a-5f2aa9854097
	I0307 18:16:42.041958  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.041967  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.041979  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.041985  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.041991  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.042125  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:42.042621  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:42.042638  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.042645  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.042652  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.044736  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:42.044761  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.044772  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.044780  786188 round_trippers.go:580]     Audit-Id: 2210cdb7-4537-4678-87c1-10faba987678
	I0307 18:16:42.044794  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.044805  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.044814  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.044826  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.044912  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:42.045234  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:42.538501  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:42.538521  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.538530  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.538536  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.540698  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:42.540720  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.540730  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.540738  786188 round_trippers.go:580]     Audit-Id: 6ecd4dd9-6bc5-48e5-a213-a5322408d9ef
	I0307 18:16:42.540747  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.540759  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.540769  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.540784  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.540923  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:42.541376  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:42.541391  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.541401  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.541409  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.543175  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:42.543198  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.543209  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.543219  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.543232  786188 round_trippers.go:580]     Audit-Id: 2f26f49c-62e9-4167-b2a5-aba25eb53a8e
	I0307 18:16:42.543241  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.543254  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.543271  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.543373  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:43.039276  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:43.039302  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.039312  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.039320  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.041703  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:43.041727  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.041737  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.041747  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.041756  786188 round_trippers.go:580]     Audit-Id: 1591d2ff-79a9-42ca-83a9-1c79298fa1fd
	I0307 18:16:43.041765  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.041774  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.041782  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.041990  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:43.042596  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:43.042613  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.042620  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.042626  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.044545  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:43.044564  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.044571  786188 round_trippers.go:580]     Audit-Id: cb2c3ecd-00e8-442e-99a2-674ce54494e5
	I0307 18:16:43.044578  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.044586  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.044595  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.044604  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.044614  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.044716  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:43.539401  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:43.539422  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.539430  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.539437  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.541635  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:43.541659  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.541670  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.541680  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.541687  786188 round_trippers.go:580]     Audit-Id: 663a5b66-807d-442f-8515-f3c013b511c5
	I0307 18:16:43.541692  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.541701  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.541706  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.541816  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:43.542261  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:43.542272  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.542279  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.542285  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.543978  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:43.544000  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.544011  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.544019  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.544028  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.544044  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.544054  786188 round_trippers.go:580]     Audit-Id: aff46b5a-2d10-4afe-a3ef-4ca592c814cf
	I0307 18:16:43.544063  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.544152  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:44.038525  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:44.038549  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.038557  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.038563  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.041679  786188 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 18:16:44.041715  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.041727  786188 round_trippers.go:580]     Audit-Id: 30e5b258-5981-4115-aa0c-d7db74b0c671
	I0307 18:16:44.041737  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.041750  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.041762  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.041770  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.041789  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.041924  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:44.042566  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:44.042585  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.042597  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.042607  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.044455  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:44.044476  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.044487  786188 round_trippers.go:580]     Audit-Id: 061059bc-5a15-4f0c-8f0d-fba489414563
	I0307 18:16:44.044499  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.044508  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.044516  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.044529  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.044542  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.044638  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:44.539243  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:44.539264  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.539276  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.539284  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.542333  786188 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 18:16:44.542361  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.542373  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.542382  786188 round_trippers.go:580]     Audit-Id: fe0fcfdc-0b3d-443b-97bc-47bc4bb21ac1
	I0307 18:16:44.542391  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.542400  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.542409  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.542422  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.542575  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:44.543223  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:44.543243  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.543255  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.543264  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.545242  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:44.545263  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.545272  786188 round_trippers.go:580]     Audit-Id: 8f5a948e-bb7d-429d-ba18-9eb4449c7726
	I0307 18:16:44.545282  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.545291  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.545300  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.545313  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.545325  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.545421  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:44.545812  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:45.038964  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:45.038997  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.039006  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.039012  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.041416  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:45.041441  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.041453  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.041463  786188 round_trippers.go:580]     Audit-Id: a4c0a5c3-7382-4925-9172-fb42b4bc46f8
	I0307 18:16:45.041476  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.041485  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.041495  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.041504  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.041655  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:45.042281  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:45.042384  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.042410  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.042423  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.044442  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:45.044464  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.044480  786188 round_trippers.go:580]     Audit-Id: 0e04065e-c13f-4508-9dab-3cd28224a402
	I0307 18:16:45.044494  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.044503  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.044512  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.044524  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.044533  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.044612  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:45.539308  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:45.539335  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.539348  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.539359  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.541728  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:45.541756  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.541768  786188 round_trippers.go:580]     Audit-Id: 68a8f2d5-6651-4ddb-8fb1-1afd9a80eb67
	I0307 18:16:45.541777  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.541786  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.541794  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.541803  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.541818  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.541972  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:45.542602  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:45.542620  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.542632  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.542642  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.544632  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:45.544654  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.544664  786188 round_trippers.go:580]     Audit-Id: 7ce41dea-e316-4dfd-b933-bdb780c67575
	I0307 18:16:45.544674  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.544685  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.544693  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.544703  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.544715  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.544807  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:46.039433  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:46.039472  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.039484  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.039493  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.041854  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:46.041879  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.041890  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.041900  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.041908  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.041916  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.041926  786188 round_trippers.go:580]     Audit-Id: 0a85ac51-7161-4322-ac85-3704bc133b0f
	I0307 18:16:46.041934  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.042053  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:46.042528  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:46.042544  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.042551  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.042557  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.044475  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:46.044495  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.044505  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.044514  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.044527  786188 round_trippers.go:580]     Audit-Id: a68b33cd-ec77-4bd5-9781-f8a65738b7a5
	I0307 18:16:46.044540  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.044553  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.044565  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.044652  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:46.539175  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:46.539196  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.539204  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.539210  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.541558  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:46.541584  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.541594  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.541603  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.541612  786188 round_trippers.go:580]     Audit-Id: fa847503-669d-4969-b6b6-f5bf0611aef3
	I0307 18:16:46.541625  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.541633  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.541646  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.541765  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:46.542251  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:46.542272  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.542279  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.542285  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.544285  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:46.544304  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.544312  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.544317  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.544324  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.544329  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.544334  786188 round_trippers.go:580]     Audit-Id: 453766dc-0cd9-4548-ac7c-306860d30ad8
	I0307 18:16:46.544343  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.544418  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:47.038684  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:47.038706  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.038714  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.038723  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.041140  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:47.041166  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.041177  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.041186  786188 round_trippers.go:580]     Audit-Id: f92c9990-7f4b-4b4b-adb4-9406cde5fe8d
	I0307 18:16:47.041195  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.041205  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.041214  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.041223  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.041350  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:47.041941  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:47.041958  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.041969  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.041980  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.044059  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:47.044094  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.044108  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.044119  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.044128  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.044145  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.044153  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.044161  786188 round_trippers.go:580]     Audit-Id: 0feed19c-98ea-4ed3-a16d-c190f24d4149
	I0307 18:16:47.044257  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:47.044552  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:47.538830  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:47.538857  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.538870  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.538880  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.541255  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:47.541281  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.541292  786188 round_trippers.go:580]     Audit-Id: b79a5abf-8ae5-44f0-87bc-d37d875a4f13
	I0307 18:16:47.541301  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.541315  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.541325  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.541339  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.541351  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.541491  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:47.542156  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:47.542176  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.542188  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.542204  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.544022  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:47.544047  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.544059  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.544069  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.544079  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.544092  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.544105  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.544118  786188 round_trippers.go:580]     Audit-Id: 7ab8337a-94bc-4c94-8a32-ccd298b625b8
	I0307 18:16:47.544230  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:48.038660  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:48.038680  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.038688  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.038695  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.040939  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:48.040964  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.040972  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.040978  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.040988  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.040993  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.040999  786188 round_trippers.go:580]     Audit-Id: 3a1bc639-b1c3-46e8-a833-e8bc1ce08d19
	I0307 18:16:48.041005  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.041136  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:48.041556  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:48.041566  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.041573  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.041579  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.043426  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:48.043470  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.043481  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.043494  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.043507  786188 round_trippers.go:580]     Audit-Id: 9f7b7e61-6ef6-4227-a75b-027c76be73a3
	I0307 18:16:48.043517  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.043526  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.043543  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.043650  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:48.539326  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:48.539354  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.539365  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.539375  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.541854  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:48.541879  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.541889  786188 round_trippers.go:580]     Audit-Id: 864837ff-53a7-4433-aa6e-d4c9f930f2d0
	I0307 18:16:48.541896  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.541905  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.541919  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.541932  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.541944  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.542055  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:48.542552  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:48.542565  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.542572  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.542580  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.544513  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:48.544535  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.544546  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.544555  786188 round_trippers.go:580]     Audit-Id: c30f4cdc-aa33-474b-b0db-9973aaa44d24
	I0307 18:16:48.544564  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.544577  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.544590  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.544603  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.544704  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.039298  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:49.039319  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.039327  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.039334  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.041443  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:49.041472  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.041483  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.041491  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.041499  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.041508  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.041518  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.041532  786188 round_trippers.go:580]     Audit-Id: a685f77e-6956-4862-b93f-2a81f21a6fdb
	I0307 18:16:49.041692  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:49.042163  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.042178  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.042185  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.042191  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.043884  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.043902  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.043909  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.043914  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.043920  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.043928  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.043936  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.043948  786188 round_trippers.go:580]     Audit-Id: d454af41-3b81-4a02-b271-c24acd9134c0
	I0307 18:16:49.044021  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.538626  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:49.538652  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.538663  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.538671  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.540878  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:49.540901  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.540909  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.540915  786188 round_trippers.go:580]     Audit-Id: e9c8658f-eca8-4ae5-a1b7-d39c054acd3c
	I0307 18:16:49.540921  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.540930  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.540939  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.540951  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.541100  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6489 chars]
	I0307 18:16:49.541578  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.541595  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.541602  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.541608  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.543469  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.543490  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.543500  786188 round_trippers.go:580]     Audit-Id: 2395a8f3-e518-41dd-9763-2c672cd86c7d
	I0307 18:16:49.543509  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.543522  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.543532  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.543545  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.543557  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.543651  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.544022  786188 pod_ready.go:92] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.544046  786188 pod_ready.go:81] duration metric: took 15.507969037s waiting for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.544059  786188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.544111  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-242095
	I0307 18:16:49.544120  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.544132  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.544143  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.545761  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.545784  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.545798  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.545808  786188 round_trippers.go:580]     Audit-Id: 54794421-d181-48ac-aab6-70ba00969f87
	I0307 18:16:49.545821  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.545829  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.545842  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.545854  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.545947  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-242095","namespace":"kube-system","uid":"58a90a44-38a6-4150-b6a5-d68e1257f6f3","resourceVersion":"286","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.mirror":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.seen":"2023-03-07T18:16:19.703879850Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0307 18:16:49.546319  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.546331  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.546338  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.546344  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.547861  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.547882  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.547892  786188 round_trippers.go:580]     Audit-Id: a2a1d732-596c-4a09-9041-2423d5dfe69d
	I0307 18:16:49.547901  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.547910  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.547924  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.547930  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.547938  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.548017  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.548258  786188 pod_ready.go:92] pod "etcd-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.548267  786188 pod_ready.go:81] duration metric: took 4.199405ms waiting for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.548277  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.548313  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-242095
	I0307 18:16:49.548321  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.548328  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.548335  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.549751  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.549771  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.549781  786188 round_trippers.go:580]     Audit-Id: 31b988a0-e19a-4a4a-9210-ef791639f50e
	I0307 18:16:49.549789  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.549799  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.549812  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.549823  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.549830  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.549928  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-242095","namespace":"kube-system","uid":"17d64e05-257c-45b2-bec2-6b363cbfb788","resourceVersion":"293","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.mirror":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.seen":"2023-03-07T18:16:19.703896620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0307 18:16:49.550276  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.550286  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.550293  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.550299  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.551830  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.551846  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.551853  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.551858  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.551863  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.551869  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.551878  786188 round_trippers.go:580]     Audit-Id: 6ebf0e78-f622-48f1-985b-a745502de4f5
	I0307 18:16:49.551888  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.551988  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.552234  786188 pod_ready.go:92] pod "kube-apiserver-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.552244  786188 pod_ready.go:81] duration metric: took 3.961927ms waiting for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.552252  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.552294  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-242095
	I0307 18:16:49.552306  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.552313  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.552319  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.553790  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.553814  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.553824  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.553834  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.553842  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.553856  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.553862  786188 round_trippers.go:580]     Audit-Id: 58e08f57-1904-48f8-88b7-578a6c1ffe50
	I0307 18:16:49.553867  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.553952  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-242095","namespace":"kube-system","uid":"536246ee-9384-411a-bd3a-a3f3862a51bc","resourceVersion":"291","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.mirror":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.seen":"2023-03-07T18:16:19.703897932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0307 18:16:49.554303  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.554313  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.554320  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.554326  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.555536  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.555557  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.555568  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.555577  786188 round_trippers.go:580]     Audit-Id: 5b30a3de-20f2-4e12-96c8-8bd1e1a41b54
	I0307 18:16:49.555590  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.555602  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.555615  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.555637  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.555723  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.555975  786188 pod_ready.go:92] pod "kube-controller-manager-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.555986  786188 pod_ready.go:81] duration metric: took 3.729298ms waiting for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.555993  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.556030  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rjsmj
	I0307 18:16:49.556037  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.556043  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.556050  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.557406  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.557429  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.557439  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.557448  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.557458  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.557470  786188 round_trippers.go:580]     Audit-Id: 4e917739-9a9c-4dc1-b8c4-3a0ca6408ea4
	I0307 18:16:49.557480  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.557493  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.557605  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rjsmj","generateName":"kube-proxy-","namespace":"kube-system","uid":"c20d9dc5-69a3-46f9-bdd7-7a54def58eac","resourceVersion":"382","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0307 18:16:49.557945  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.557956  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.557963  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.557974  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.559337  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.559358  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.559368  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.559377  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.559390  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.559410  786188 round_trippers.go:580]     Audit-Id: 106d7541-bfed-4e3d-8aaa-483c2ab73fbf
	I0307 18:16:49.559419  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.559427  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.559531  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.559764  786188 pod_ready.go:92] pod "kube-proxy-rjsmj" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.559777  786188 pod_ready.go:81] duration metric: took 3.777889ms waiting for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.559786  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.739167  786188 request.go:622] Waited for 179.320938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-242095
	I0307 18:16:49.739221  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-242095
	I0307 18:16:49.739226  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.739234  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.739244  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.741119  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.741146  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.741157  786188 round_trippers.go:580]     Audit-Id: 06997025-cfb8-4b53-b1ba-2f1d1ce94405
	I0307 18:16:49.741163  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.741169  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.741178  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.741184  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.741192  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.741282  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-242095","namespace":"kube-system","uid":"bd31dd93-d9b4-4f7a-9d31-d15d68702789","resourceVersion":"282","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.mirror":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.seen":"2023-03-07T18:16:19.703898726Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0307 18:16:49.938682  786188 request.go:622] Waited for 196.990276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.938754  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.938759  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.938767  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.938776  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.940976  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:49.940994  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.941002  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.941008  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.941020  786188 round_trippers.go:580]     Audit-Id: 98b81c03-a2ad-45a5-a79c-c2eb53225ba9
	I0307 18:16:49.941034  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.941046  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.941057  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.941176  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.941463  786188 pod_ready.go:92] pod "kube-scheduler-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.941473  786188 pod_ready.go:81] duration metric: took 381.681393ms waiting for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.941484  786188 pod_ready.go:38] duration metric: took 15.915650238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:16:49.941506  786188 api_server.go:51] waiting for apiserver process to appear ...
	I0307 18:16:49.941544  786188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:16:49.950433  786188 command_runner.go:130] > 2099
	I0307 18:16:49.951074  786188 api_server.go:71] duration metric: took 16.419676095s to wait for apiserver process to appear ...
	I0307 18:16:49.951090  786188 api_server.go:87] waiting for apiserver healthz status ...
	I0307 18:16:49.951099  786188 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0307 18:16:49.955641  786188 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0307 18:16:49.955704  786188 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0307 18:16:49.955713  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.955721  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.955730  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.956295  786188 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 18:16:49.956312  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.956323  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.956338  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.956351  786188 round_trippers.go:580]     Content-Length: 263
	I0307 18:16:49.956364  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.956378  786188 round_trippers.go:580]     Audit-Id: 511ba1cb-11e2-49d8-8b94-49c70811be91
	I0307 18:16:49.956388  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.956397  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.956421  786188 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 18:16:49.956504  786188 api_server.go:140] control plane version: v1.26.2
	I0307 18:16:49.956521  786188 api_server.go:130] duration metric: took 5.425304ms to wait for apiserver health ...
	I0307 18:16:49.956534  786188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 18:16:50.138945  786188 request.go:622] Waited for 182.324216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.138996  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.139001  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.139008  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.139015  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.141960  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:50.141988  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.142000  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.142009  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.142017  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.142026  786188 round_trippers.go:580]     Audit-Id: 6f9f4ad5-f828-4996-8973-46c1a9a9c095
	I0307 18:16:50.142036  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.142049  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.143176  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55540 chars]
	I0307 18:16:50.145636  786188 system_pods.go:59] 8 kube-system pods found
	I0307 18:16:50.145662  786188 system_pods.go:61] "coredns-787d4945fb-fsll9" [17db7207-f2ce-4566-85fc-dc7e0eb65d09] Running
	I0307 18:16:50.145667  786188 system_pods.go:61] "etcd-multinode-242095" [58a90a44-38a6-4150-b6a5-d68e1257f6f3] Running
	I0307 18:16:50.145671  786188 system_pods.go:61] "kindnet-4sm84" [c406577e-74d2-4d81-b8a4-c827a78e2d61] Running
	I0307 18:16:50.145675  786188 system_pods.go:61] "kube-apiserver-multinode-242095" [17d64e05-257c-45b2-bec2-6b363cbfb788] Running
	I0307 18:16:50.145679  786188 system_pods.go:61] "kube-controller-manager-multinode-242095" [536246ee-9384-411a-bd3a-a3f3862a51bc] Running
	I0307 18:16:50.145683  786188 system_pods.go:61] "kube-proxy-rjsmj" [c20d9dc5-69a3-46f9-bdd7-7a54def58eac] Running
	I0307 18:16:50.145687  786188 system_pods.go:61] "kube-scheduler-multinode-242095" [bd31dd93-d9b4-4f7a-9d31-d15d68702789] Running
	I0307 18:16:50.145690  786188 system_pods.go:61] "storage-provisioner" [ea1890f3-3928-474e-8b2d-10da6a0e9f14] Running
	I0307 18:16:50.145696  786188 system_pods.go:74] duration metric: took 189.152884ms to wait for pod list to return data ...
	I0307 18:16:50.145703  786188 default_sa.go:34] waiting for default service account to be created ...
	I0307 18:16:50.339115  786188 request.go:622] Waited for 193.345838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0307 18:16:50.339183  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0307 18:16:50.339192  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.339199  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.339206  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.341266  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:50.341285  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.341293  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.341299  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.341304  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.341310  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.341316  786188 round_trippers.go:580]     Content-Length: 261
	I0307 18:16:50.341321  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.341327  786188 round_trippers.go:580]     Audit-Id: a800dba8-449d-434d-9dd2-ffe846382bf4
	I0307 18:16:50.341358  786188 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"46302e89-7181-491f-ae53-45d5e8f31c31","resourceVersion":"303","creationTimestamp":"2023-03-07T18:16:32Z"}}]}
	I0307 18:16:50.341550  786188 default_sa.go:45] found service account: "default"
	I0307 18:16:50.341562  786188 default_sa.go:55] duration metric: took 195.853934ms for default service account to be created ...
	I0307 18:16:50.341569  786188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 18:16:50.538993  786188 request.go:622] Waited for 197.357696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.539055  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.539063  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.539077  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.539093  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.542124  786188 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 18:16:50.542146  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.542153  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.542159  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.542165  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.542174  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.542183  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.542192  786188 round_trippers.go:580]     Audit-Id: f143b7a5-5b3e-4c43-a610-e314527d137f
	I0307 18:16:50.542590  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55540 chars]
	I0307 18:16:50.544985  786188 system_pods.go:86] 8 kube-system pods found
	I0307 18:16:50.545011  786188 system_pods.go:89] "coredns-787d4945fb-fsll9" [17db7207-f2ce-4566-85fc-dc7e0eb65d09] Running
	I0307 18:16:50.545019  786188 system_pods.go:89] "etcd-multinode-242095" [58a90a44-38a6-4150-b6a5-d68e1257f6f3] Running
	I0307 18:16:50.545028  786188 system_pods.go:89] "kindnet-4sm84" [c406577e-74d2-4d81-b8a4-c827a78e2d61] Running
	I0307 18:16:50.545041  786188 system_pods.go:89] "kube-apiserver-multinode-242095" [17d64e05-257c-45b2-bec2-6b363cbfb788] Running
	I0307 18:16:50.545048  786188 system_pods.go:89] "kube-controller-manager-multinode-242095" [536246ee-9384-411a-bd3a-a3f3862a51bc] Running
	I0307 18:16:50.545057  786188 system_pods.go:89] "kube-proxy-rjsmj" [c20d9dc5-69a3-46f9-bdd7-7a54def58eac] Running
	I0307 18:16:50.545064  786188 system_pods.go:89] "kube-scheduler-multinode-242095" [bd31dd93-d9b4-4f7a-9d31-d15d68702789] Running
	I0307 18:16:50.545070  786188 system_pods.go:89] "storage-provisioner" [ea1890f3-3928-474e-8b2d-10da6a0e9f14] Running
	I0307 18:16:50.545082  786188 system_pods.go:126] duration metric: took 203.507042ms to wait for k8s-apps to be running ...
	I0307 18:16:50.545097  786188 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 18:16:50.545147  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:16:50.554639  786188 system_svc.go:56] duration metric: took 9.5373ms WaitForService to wait for kubelet.
	I0307 18:16:50.554664  786188 kubeadm.go:578] duration metric: took 17.023267126s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 18:16:50.554689  786188 node_conditions.go:102] verifying NodePressure condition ...
	I0307 18:16:50.739111  786188 request.go:622] Waited for 184.334925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0307 18:16:50.739161  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0307 18:16:50.739165  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.739174  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.739180  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.741195  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:50.741234  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.741247  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.741262  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.741269  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.741275  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.741284  786188 round_trippers.go:580]     Audit-Id: d471e32b-a91e-429d-acc8-aa5265b543e3
	I0307 18:16:50.741290  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.741429  786188 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5214 chars]
	I0307 18:16:50.741933  786188 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0307 18:16:50.741958  786188 node_conditions.go:123] node cpu capacity is 8
	I0307 18:16:50.741973  786188 node_conditions.go:105] duration metric: took 187.279803ms to run NodePressure ...
	I0307 18:16:50.741990  786188 start.go:228] waiting for startup goroutines ...
	I0307 18:16:50.742004  786188 start.go:233] waiting for cluster config update ...
	I0307 18:16:50.742021  786188 start.go:242] writing updated cluster config ...
	I0307 18:16:50.744145  786188 out.go:177] 
	I0307 18:16:50.745952  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:50.746044  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:16:50.747936  786188 out.go:177] * Starting worker node multinode-242095-m02 in cluster multinode-242095
	I0307 18:16:50.749260  786188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0307 18:16:50.750747  786188 out.go:177] * Pulling base image ...
	I0307 18:16:50.752517  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:16:50.752535  786188 cache.go:57] Caching tarball of preloaded images
	I0307 18:16:50.752545  786188 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon
	I0307 18:16:50.752624  786188 preload.go:174] Found /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 18:16:50.752639  786188 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 18:16:50.752729  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:16:50.815674  786188 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon, skipping pull
	I0307 18:16:50.815700  786188 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 exists in daemon, skipping load
	I0307 18:16:50.815722  786188 cache.go:193] Successfully downloaded all kic artifacts
	I0307 18:16:50.815760  786188 start.go:364] acquiring machines lock for multinode-242095-m02: {Name:mk9ddc5dde012548a60ee1487f1c4b2a77a956b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:16:50.815871  786188 start.go:368] acquired machines lock for "multinode-242095-m02" in 86.682µs
	I0307 18:16:50.815899  786188 start.go:93] Provisioning new machine with config: &{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 18:16:50.815979  786188 start.go:125] createHost starting for "m02" (driver="docker")
	I0307 18:16:50.818373  786188 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 18:16:50.818510  786188 start.go:159] libmachine.API.Create for "multinode-242095" (driver="docker")
	I0307 18:16:50.818543  786188 client.go:168] LocalClient.Create starting
	I0307 18:16:50.818621  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem
	I0307 18:16:50.818668  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:16:50.818694  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:16:50.818768  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem
	I0307 18:16:50.818795  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:16:50.818812  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:16:50.819015  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:16:50.879143  786188 network_create.go:76] Found existing network {name:multinode-242095 subnet:0xc001047b00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0307 18:16:50.879182  786188 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-242095-m02" container
	I0307 18:16:50.879237  786188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 18:16:50.941133  786188 cli_runner.go:164] Run: docker volume create multinode-242095-m02 --label name.minikube.sigs.k8s.io=multinode-242095-m02 --label created_by.minikube.sigs.k8s.io=true
	I0307 18:16:51.003964  786188 oci.go:103] Successfully created a docker volume multinode-242095-m02
	I0307 18:16:51.004065  786188 cli_runner.go:164] Run: docker run --rm --name multinode-242095-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095-m02 --entrypoint /usr/bin/test -v multinode-242095-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -d /var/lib
	I0307 18:16:51.595371  786188 oci.go:107] Successfully prepared a docker volume multinode-242095-m02
	I0307 18:16:51.595420  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:16:51.595465  786188 kic.go:190] Starting extracting preloaded images to volume ...
	I0307 18:16:51.595533  786188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 18:16:56.644032  786188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir: (5.048448058s)
	I0307 18:16:56.644071  786188 kic.go:199] duration metric: took 5.048601 seconds to extract preloaded images to volume
	W0307 18:16:56.644239  786188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 18:16:56.644355  786188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 18:16:56.771459  786188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-242095-m02 --name multinode-242095-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-242095-m02 --network multinode-242095 --ip 192.168.58.3 --volume multinode-242095-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9
	I0307 18:16:57.240385  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Running}}
	I0307 18:16:57.313862  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:16:57.385444  786188 cli_runner.go:164] Run: docker exec multinode-242095-m02 stat /var/lib/dpkg/alternatives/iptables
	I0307 18:16:57.508588  786188 oci.go:144] the created container "multinode-242095-m02" has a running status.
	I0307 18:16:57.508623  786188 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa...
	I0307 18:16:57.977272  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0307 18:16:57.977331  786188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 18:16:58.083071  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:16:58.148879  786188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 18:16:58.148905  786188 kic_runner.go:114] Args: [docker exec --privileged multinode-242095-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 18:16:58.259844  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:16:58.324538  786188 machine.go:88] provisioning docker machine ...
	I0307 18:16:58.324587  786188 ubuntu.go:169] provisioning hostname "multinode-242095-m02"
	I0307 18:16:58.324650  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:58.390990  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:58.391424  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:58.391438  786188 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-242095-m02 && echo "multinode-242095-m02" | sudo tee /etc/hostname
	I0307 18:16:58.515988  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-242095-m02
	
	I0307 18:16:58.516086  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:58.579662  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:58.580114  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:58.580133  786188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-242095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-242095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-242095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:16:58.690941  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:16:58.690975  786188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15985-636026/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-636026/.minikube}
	I0307 18:16:58.690997  786188 ubuntu.go:177] setting up certificates
	I0307 18:16:58.691009  786188 provision.go:83] configureAuth start
	I0307 18:16:58.691071  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095-m02
	I0307 18:16:58.757503  786188 provision.go:138] copyHostCerts
	I0307 18:16:58.757549  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:58.757574  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem, removing ...
	I0307 18:16:58.757583  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:58.757641  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem (1082 bytes)
	I0307 18:16:58.757714  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:58.757732  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem, removing ...
	I0307 18:16:58.757735  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:58.757757  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem (1123 bytes)
	I0307 18:16:58.757811  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:58.757827  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem, removing ...
	I0307 18:16:58.757833  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:58.757856  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem (1679 bytes)
	I0307 18:16:58.757912  786188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem org=jenkins.multinode-242095-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-242095-m02]
	I0307 18:16:58.846079  786188 provision.go:172] copyRemoteCerts
	I0307 18:16:58.846145  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:16:58.846191  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:58.908801  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:16:58.990464  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 18:16:58.990517  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 18:16:59.007658  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 18:16:59.007729  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0307 18:16:59.024463  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 18:16:59.024520  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 18:16:59.040858  786188 provision.go:86] duration metric: configureAuth took 349.838345ms
	I0307 18:16:59.040879  786188 ubuntu.go:193] setting minikube options for container-runtime
	I0307 18:16:59.041027  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:59.041071  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:59.103856  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:59.104272  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:59.104285  786188 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 18:16:59.215528  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0307 18:16:59.215556  786188 ubuntu.go:71] root file system type: overlay
	I0307 18:16:59.215685  786188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 18:16:59.215772  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:59.281374  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:59.281811  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:59.281873  786188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 18:16:59.400263  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 18:16:59.400347  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:59.466849  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:59.467335  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:59.467355  786188 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 18:17:00.101666  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-07 18:16:59.392737492 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0307 18:17:00.101702  786188 machine.go:91] provisioned docker machine in 1.777133458s
	I0307 18:17:00.101712  786188 client.go:171] LocalClient.Create took 9.28316106s
	I0307 18:17:00.101725  786188 start.go:167] duration metric: libmachine.API.Create for "multinode-242095" took 9.283216458s
	I0307 18:17:00.101734  786188 start.go:300] post-start starting for "multinode-242095-m02" (driver="docker")
	I0307 18:17:00.101747  786188 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:17:00.101813  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:17:00.101861  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.167841  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.255549  786188 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:17:00.258236  786188 command_runner.go:130] > NAME="Ubuntu"
	I0307 18:17:00.258261  786188 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0307 18:17:00.258268  786188 command_runner.go:130] > ID=ubuntu
	I0307 18:17:00.258274  786188 command_runner.go:130] > ID_LIKE=debian
	I0307 18:17:00.258280  786188 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0307 18:17:00.258285  786188 command_runner.go:130] > VERSION_ID="20.04"
	I0307 18:17:00.258292  786188 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0307 18:17:00.258296  786188 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0307 18:17:00.258301  786188 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0307 18:17:00.258309  786188 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0307 18:17:00.258315  786188 command_runner.go:130] > VERSION_CODENAME=focal
	I0307 18:17:00.258322  786188 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0307 18:17:00.258388  786188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 18:17:00.258405  786188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 18:17:00.258416  786188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 18:17:00.258424  786188 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0307 18:17:00.258438  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/addons for local assets ...
	I0307 18:17:00.258493  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/files for local assets ...
	I0307 18:17:00.258582  786188 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> 6427432.pem in /etc/ssl/certs
	I0307 18:17:00.258594  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /etc/ssl/certs/6427432.pem
	I0307 18:17:00.258696  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 18:17:00.265462  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:17:00.283079  786188 start.go:303] post-start completed in 181.327543ms
	I0307 18:17:00.283382  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095-m02
	I0307 18:17:00.347727  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:17:00.347971  786188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:17:00.348012  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.412592  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.491458  786188 command_runner.go:130] > 17%
	I0307 18:17:00.491740  786188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 18:17:00.495498  786188 command_runner.go:130] > 244G
	I0307 18:17:00.495529  786188 start.go:128] duration metric: createHost completed in 9.679540684s
	I0307 18:17:00.495538  786188 start.go:83] releasing machines lock for "multinode-242095-m02", held for 9.679655304s
	I0307 18:17:00.495609  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095-m02
	I0307 18:17:00.561381  786188 out.go:177] * Found network options:
	I0307 18:17:00.563029  786188 out.go:177]   - NO_PROXY=192.168.58.2
	W0307 18:17:00.564555  786188 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 18:17:00.564594  786188 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 18:17:00.564670  786188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 18:17:00.564708  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.564745  786188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 18:17:00.564794  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.634283  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.634283  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.751205  786188 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 18:17:00.752530  786188 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0307 18:17:00.752555  786188 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0307 18:17:00.752564  786188 command_runner.go:130] > Device: e3h/227d	Inode: 2131168     Links: 1
	I0307 18:17:00.752577  786188 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:17:00.752589  786188 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:17:00.752598  786188 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:17:00.752609  786188 command_runner.go:130] > Change: 2023-03-07 18:01:36.367924495 +0000
	I0307 18:17:00.752617  786188 command_runner.go:130] >  Birth: -
	I0307 18:17:00.752684  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 18:17:00.773083  786188 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 18:17:00.773155  786188 ssh_runner.go:195] Run: which cri-dockerd
	I0307 18:17:00.775894  786188 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 18:17:00.776012  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 18:17:00.783049  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 18:17:00.795569  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:17:00.810723  786188 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0307 18:17:00.810749  786188 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0307 18:17:00.810760  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:17:00.810790  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:17:00.810888  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:17:00.822212  786188 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:17:00.822233  786188 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:17:00.822924  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 18:17:00.830248  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:17:00.837462  786188 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:17:00.837502  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:17:00.844663  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:17:00.851989  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:17:00.859170  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:17:00.867008  786188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:17:00.873876  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 18:17:00.881482  786188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:17:00.888005  786188 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 18:17:00.888066  786188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:17:00.894114  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:17:00.975468  786188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:17:01.057347  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:17:01.057405  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:17:01.057456  786188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 18:17:01.067139  786188 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0307 18:17:01.067232  786188 command_runner.go:130] > [Unit]
	I0307 18:17:01.067254  786188 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 18:17:01.067263  786188 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 18:17:01.067274  786188 command_runner.go:130] > BindsTo=containerd.service
	I0307 18:17:01.067284  786188 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0307 18:17:01.067294  786188 command_runner.go:130] > Wants=network-online.target
	I0307 18:17:01.067306  786188 command_runner.go:130] > Requires=docker.socket
	I0307 18:17:01.067320  786188 command_runner.go:130] > StartLimitBurst=3
	I0307 18:17:01.067331  786188 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 18:17:01.067338  786188 command_runner.go:130] > [Service]
	I0307 18:17:01.067347  786188 command_runner.go:130] > Type=notify
	I0307 18:17:01.067353  786188 command_runner.go:130] > Restart=on-failure
	I0307 18:17:01.067371  786188 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0307 18:17:01.067397  786188 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 18:17:01.067415  786188 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 18:17:01.067430  786188 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 18:17:01.067465  786188 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 18:17:01.067479  786188 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 18:17:01.067488  786188 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 18:17:01.067502  786188 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 18:17:01.067518  786188 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 18:17:01.067532  786188 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 18:17:01.067537  786188 command_runner.go:130] > ExecStart=
	I0307 18:17:01.067560  786188 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0307 18:17:01.067570  786188 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 18:17:01.067580  786188 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 18:17:01.067593  786188 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 18:17:01.067603  786188 command_runner.go:130] > LimitNOFILE=infinity
	I0307 18:17:01.067609  786188 command_runner.go:130] > LimitNPROC=infinity
	I0307 18:17:01.067616  786188 command_runner.go:130] > LimitCORE=infinity
	I0307 18:17:01.067627  786188 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 18:17:01.067638  786188 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 18:17:01.067644  786188 command_runner.go:130] > TasksMax=infinity
	I0307 18:17:01.067653  786188 command_runner.go:130] > TimeoutStartSec=0
	I0307 18:17:01.067663  786188 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 18:17:01.067673  786188 command_runner.go:130] > Delegate=yes
	I0307 18:17:01.067687  786188 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 18:17:01.067697  786188 command_runner.go:130] > KillMode=process
	I0307 18:17:01.067704  786188 command_runner.go:130] > [Install]
	I0307 18:17:01.067713  786188 command_runner.go:130] > WantedBy=multi-user.target
	I0307 18:17:01.068134  786188 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0307 18:17:01.068203  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:17:01.078269  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:17:01.091176  786188 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:17:01.091199  786188 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:17:01.092364  786188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 18:17:01.202848  786188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 18:17:01.296007  786188 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 18:17:01.296044  786188 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 18:17:01.310648  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:17:01.386967  786188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 18:17:01.604834  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:17:01.683772  786188 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0307 18:17:01.683854  786188 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 18:17:01.760587  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:17:01.840236  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:17:01.912839  786188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 18:17:01.924383  786188 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 18:17:01.924454  786188 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 18:17:01.927415  786188 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 18:17:01.927437  786188 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 18:17:01.927471  786188 command_runner.go:130] > Device: ech/236d	Inode: 206         Links: 1
	I0307 18:17:01.927484  786188 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0307 18:17:01.927494  786188 command_runner.go:130] > Access: 2023-03-07 18:17:01.916991314 +0000
	I0307 18:17:01.927502  786188 command_runner.go:130] > Modify: 2023-03-07 18:17:01.916991314 +0000
	I0307 18:17:01.927511  786188 command_runner.go:130] > Change: 2023-03-07 18:17:01.916991314 +0000
	I0307 18:17:01.927515  786188 command_runner.go:130] >  Birth: -
	I0307 18:17:01.927536  786188 start.go:553] Will wait 60s for crictl version
	I0307 18:17:01.927573  786188 ssh_runner.go:195] Run: which crictl
	I0307 18:17:01.929994  786188 command_runner.go:130] > /usr/bin/crictl
	I0307 18:17:01.930118  786188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:17:02.009002  786188 command_runner.go:130] > Version:  0.1.0
	I0307 18:17:02.009021  786188 command_runner.go:130] > RuntimeName:  docker
	I0307 18:17:02.009025  786188 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0307 18:17:02.009030  786188 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 18:17:02.009047  786188 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0307 18:17:02.009095  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:17:02.030888  786188 command_runner.go:130] > 23.0.1
	I0307 18:17:02.032020  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:17:02.054247  786188 command_runner.go:130] > 23.0.1
	I0307 18:17:02.059136  786188 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0307 18:17:02.060696  786188 out.go:177]   - env NO_PROXY=192.168.58.2
	I0307 18:17:02.062107  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:17:02.127576  786188 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0307 18:17:02.130877  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:17:02.140481  786188 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095 for IP: 192.168.58.3
	I0307 18:17:02.140516  786188 certs.go:186] acquiring lock for shared ca certs: {Name:mk6aa9dfc4b93dc10fe6d5a07411d8b3adb46804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:17:02.140670  786188 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key
	I0307 18:17:02.140727  786188 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key
	I0307 18:17:02.140744  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 18:17:02.140761  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 18:17:02.140779  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 18:17:02.140796  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 18:17:02.140862  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem (1338 bytes)
	W0307 18:17:02.140908  786188 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743_empty.pem, impossibly tiny 0 bytes
	I0307 18:17:02.140922  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 18:17:02.140959  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem (1082 bytes)
	I0307 18:17:02.140985  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:17:02.141009  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem (1679 bytes)
	I0307 18:17:02.141050  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:17:02.141076  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem -> /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.141089  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.141101  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.141418  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:17:02.158494  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 18:17:02.175287  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:17:02.191822  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:17:02.208488  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem --> /usr/share/ca-certificates/642743.pem (1338 bytes)
	I0307 18:17:02.225034  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /usr/share/ca-certificates/6427432.pem (1708 bytes)
	I0307 18:17:02.241535  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:17:02.257607  786188 ssh_runner.go:195] Run: openssl version
	I0307 18:17:02.262425  786188 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0307 18:17:02.262532  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/642743.pem && ln -fs /usr/share/ca-certificates/642743.pem /etc/ssl/certs/642743.pem"
	I0307 18:17:02.269653  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.272528  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.272569  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.272616  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.277140  786188 command_runner.go:130] > 51391683
	I0307 18:17:02.277325  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/642743.pem /etc/ssl/certs/51391683.0"
	I0307 18:17:02.284243  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6427432.pem && ln -fs /usr/share/ca-certificates/6427432.pem /etc/ssl/certs/6427432.pem"
	I0307 18:17:02.291082  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.293812  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.293909  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.293944  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.298398  786188 command_runner.go:130] > 3ec20f2e
	I0307 18:17:02.298454  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6427432.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 18:17:02.305292  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:17:02.312208  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.314998  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.315049  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.315089  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.319291  786188 command_runner.go:130] > b5213941
	I0307 18:17:02.319466  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 18:17:02.326187  786188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 18:17:02.347919  786188 command_runner.go:130] > cgroupfs
	I0307 18:17:02.349252  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:17:02.349268  786188 cni.go:136] 2 nodes found, recommending kindnet
	I0307 18:17:02.349278  786188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 18:17:02.349300  786188 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-242095 NodeName:multinode-242095-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 18:17:02.349416  786188 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-242095-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 18:17:02.349473  786188 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-242095-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 18:17:02.349518  786188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 18:17:02.356623  786188 command_runner.go:130] > kubeadm
	I0307 18:17:02.356639  786188 command_runner.go:130] > kubectl
	I0307 18:17:02.356646  786188 command_runner.go:130] > kubelet
	I0307 18:17:02.356668  786188 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:17:02.356715  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0307 18:17:02.363580  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0307 18:17:02.375684  786188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:17:02.387570  786188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0307 18:17:02.390230  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:17:02.399218  786188 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:17:02.399461  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:17:02.399461  786188 start.go:301] JoinCluster: &{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:17:02.399544  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 18:17:02.399593  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:17:02.463017  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:17:02.603231  786188 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token n4scxe.np4ruqei2g7m4axa --discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb 
	I0307 18:17:02.603307  786188 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 18:17:02.603344  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n4scxe.np4ruqei2g7m4axa --discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-242095-m02"
	I0307 18:17:02.640308  786188 command_runner.go:130] > [preflight] Running pre-flight checks
	I0307 18:17:02.666566  786188 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0307 18:17:02.666590  786188 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0307 18:17:02.666598  786188 command_runner.go:130] > OS: Linux
	I0307 18:17:02.666605  786188 command_runner.go:130] > CGROUPS_CPU: enabled
	I0307 18:17:02.666613  786188 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0307 18:17:02.666621  786188 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0307 18:17:02.666628  786188 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0307 18:17:02.666636  786188 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0307 18:17:02.666644  786188 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0307 18:17:02.666661  786188 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0307 18:17:02.666673  786188 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0307 18:17:02.666684  786188 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0307 18:17:02.747472  786188 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0307 18:17:02.747511  786188 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0307 18:17:02.774397  786188 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:17:02.774430  786188 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:17:02.774436  786188 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 18:17:02.863585  786188 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0307 18:17:04.380510  786188 command_runner.go:130] > This node has joined the cluster:
	I0307 18:17:04.380541  786188 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0307 18:17:04.380550  786188 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0307 18:17:04.380560  786188 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0307 18:17:04.383031  786188 command_runner.go:130] ! W0307 18:17:02.639882    1339 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 18:17:04.383072  786188 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0307 18:17:04.383085  786188 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:17:04.383108  786188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n4scxe.np4ruqei2g7m4axa --discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-242095-m02": (1.779744201s)
	I0307 18:17:04.383133  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 18:17:04.553919  786188 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0307 18:17:04.553966  786188 start.go:303] JoinCluster complete in 2.154521785s
	I0307 18:17:04.553980  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:17:04.553987  786188 cni.go:136] 2 nodes found, recommending kindnet
	I0307 18:17:04.554030  786188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 18:17:04.557461  786188 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 18:17:04.557489  786188 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0307 18:17:04.557499  786188 command_runner.go:130] > Device: 36h/54d	Inode: 2129263     Links: 1
	I0307 18:17:04.557510  786188 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:17:04.557519  786188 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:17:04.557527  786188 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:17:04.557534  786188 command_runner.go:130] > Change: 2023-03-07 18:01:35.631850484 +0000
	I0307 18:17:04.557541  786188 command_runner.go:130] >  Birth: -
	I0307 18:17:04.557588  786188 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 18:17:04.557599  786188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 18:17:04.570601  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 18:17:04.729915  786188 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0307 18:17:04.733114  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0307 18:17:04.735252  786188 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0307 18:17:04.747502  786188 command_runner.go:130] > daemonset.apps/kindnet configured
	I0307 18:17:04.751561  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:17:04.751889  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:17:04.752290  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:17:04.752304  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.752315  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.752322  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.754283  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.754302  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.754310  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:17:04.754315  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.754321  786188 round_trippers.go:580]     Audit-Id: 6bda0436-6cff-4b57-bee7-a8697c5bbc6c
	I0307 18:17:04.754327  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.754337  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.754343  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.754350  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.754373  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"420","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 18:17:04.754463  786188 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-242095" context rescaled to 1 replicas
	I0307 18:17:04.754489  786188 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 18:17:04.757797  786188 out.go:177] * Verifying Kubernetes components...
	I0307 18:17:04.759265  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:17:04.769244  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:17:04.769474  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:17:04.769741  786188 node_ready.go:35] waiting up to 6m0s for node "multinode-242095-m02" to be "Ready" ...
	I0307 18:17:04.769803  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:04.769810  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.769820  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.769828  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.771655  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.771672  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.771679  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.771685  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.771691  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.771697  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.771702  786188 round_trippers.go:580]     Audit-Id: 5ab08e79-ef26-4515-968b-fc3732264a78
	I0307 18:17:04.771708  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.771855  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:04.772156  786188 node_ready.go:49] node "multinode-242095-m02" has status "Ready":"True"
	I0307 18:17:04.772188  786188 node_ready.go:38] duration metric: took 2.413192ms waiting for node "multinode-242095-m02" to be "Ready" ...
	I0307 18:17:04.772196  786188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:17:04.772247  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:17:04.772254  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.772261  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.772267  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.774872  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:04.774890  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.774898  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.774904  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.774910  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.774916  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.774923  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.774936  786188 round_trippers.go:580]     Audit-Id: d7d62a2a-e7fe-4ac9-ae28-42cc2fa44afa
	I0307 18:17:04.775429  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"466"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65879 chars]
	I0307 18:17:04.777438  786188 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.777500  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:17:04.777508  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.777515  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.777521  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.779153  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.779173  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.779183  786188 round_trippers.go:580]     Audit-Id: b81b45b2-f52a-42fc-83b1-81f75a10d239
	I0307 18:17:04.779192  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.779201  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.779209  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.779222  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.779231  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.779341  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6489 chars]
	I0307 18:17:04.779878  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.779893  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.779905  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.779915  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.781445  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.781464  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.781474  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.781483  786188 round_trippers.go:580]     Audit-Id: 36d0bce5-eb66-4170-930c-123103ba6647
	I0307 18:17:04.781492  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.781501  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.781511  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.781527  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.781653  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.781955  786188 pod_ready.go:92] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.781966  786188 pod_ready.go:81] duration metric: took 4.5092ms waiting for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.781979  786188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.782030  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-242095
	I0307 18:17:04.782038  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.782045  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.782052  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.783513  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.783532  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.783542  786188 round_trippers.go:580]     Audit-Id: 75064e64-ad99-4f83-b506-f539966c9e5c
	I0307 18:17:04.783552  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.783562  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.783576  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.783589  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.783601  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.783684  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-242095","namespace":"kube-system","uid":"58a90a44-38a6-4150-b6a5-d68e1257f6f3","resourceVersion":"286","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.mirror":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.seen":"2023-03-07T18:16:19.703879850Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0307 18:17:04.784037  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.784048  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.784054  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.784062  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.785436  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.785452  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.785459  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.785464  786188 round_trippers.go:580]     Audit-Id: 0e99177c-b970-4d49-873e-746198f343af
	I0307 18:17:04.785470  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.785476  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.785482  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.785490  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.785599  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.785906  786188 pod_ready.go:92] pod "etcd-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.785920  786188 pod_ready.go:81] duration metric: took 3.927043ms waiting for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.785939  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.785988  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-242095
	I0307 18:17:04.785998  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.786009  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.786019  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.787572  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.787594  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.787605  786188 round_trippers.go:580]     Audit-Id: 327534ab-e94f-4189-8db5-10e32565ea6e
	I0307 18:17:04.787613  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.787619  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.787625  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.787634  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.787639  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.787745  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-242095","namespace":"kube-system","uid":"17d64e05-257c-45b2-bec2-6b363cbfb788","resourceVersion":"293","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.mirror":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.seen":"2023-03-07T18:16:19.703896620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0307 18:17:04.788224  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.788238  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.788249  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.788259  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.789739  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.789754  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.789764  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.789773  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.789786  786188 round_trippers.go:580]     Audit-Id: bcc7fb07-7b23-461a-af74-993a7168158d
	I0307 18:17:04.789798  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.789812  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.789822  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.789918  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.790302  786188 pod_ready.go:92] pod "kube-apiserver-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.790321  786188 pod_ready.go:81] duration metric: took 4.368106ms waiting for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.790337  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.790388  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-242095
	I0307 18:17:04.790399  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.790410  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.790424  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.791979  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.791996  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.792003  786188 round_trippers.go:580]     Audit-Id: 646a942d-c99c-4307-99f0-ab5d2cee3ee0
	I0307 18:17:04.792009  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.792014  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.792019  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.792024  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.792030  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.792162  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-242095","namespace":"kube-system","uid":"536246ee-9384-411a-bd3a-a3f3862a51bc","resourceVersion":"291","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.mirror":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.seen":"2023-03-07T18:16:19.703897932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0307 18:17:04.792511  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.792522  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.792529  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.792535  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.793926  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.793945  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.793952  786188 round_trippers.go:580]     Audit-Id: 76ba7c60-2007-4db3-924f-d34917cf3d9f
	I0307 18:17:04.793958  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.793965  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.793974  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.793994  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.794003  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.794086  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.794440  786188 pod_ready.go:92] pod "kube-controller-manager-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.794452  786188 pod_ready.go:81] duration metric: took 4.10398ms waiting for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.794463  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.970679  786188 request.go:622] Waited for 176.114787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rjsmj
	I0307 18:17:04.970735  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rjsmj
	I0307 18:17:04.970740  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.970747  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.970754  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.972924  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:04.972947  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.972955  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.972962  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.972972  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.972985  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.973000  786188 round_trippers.go:580]     Audit-Id: 1f5efaf3-dd1e-4e2f-9e8f-24a78d79c420
	I0307 18:17:04.973010  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.973133  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rjsmj","generateName":"kube-proxy-","namespace":"kube-system","uid":"c20d9dc5-69a3-46f9-bdd7-7a54def58eac","resourceVersion":"382","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0307 18:17:05.169945  786188 request.go:622] Waited for 196.28407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:05.170014  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:05.170022  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:05.170032  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:05.170044  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:05.171957  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:05.171987  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:05.171995  786188 round_trippers.go:580]     Audit-Id: d2d078f8-df73-4307-8af3-3e6e92376332
	I0307 18:17:05.172001  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:05.172007  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:05.172012  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:05.172032  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:05.172041  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:05 GMT
	I0307 18:17:05.172149  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:05.172476  786188 pod_ready.go:92] pod "kube-proxy-rjsmj" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:05.172488  786188 pod_ready.go:81] duration metric: took 378.016795ms waiting for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:05.172515  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbx65" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:05.369898  786188 request.go:622] Waited for 197.284834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:05.369956  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:05.369961  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:05.369969  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:05.369975  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:05.371962  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:05.371988  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:05.371999  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:05 GMT
	I0307 18:17:05.372008  786188 round_trippers.go:580]     Audit-Id: 33371919-c646-4a6b-a5a1-5ae951e16acd
	I0307 18:17:05.372020  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:05.372028  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:05.372040  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:05.372052  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:05.372132  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"451","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0307 18:17:05.570854  786188 request.go:622] Waited for 198.337189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:05.570923  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:05.570930  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:05.570938  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:05.570945  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:05.573112  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:05.573143  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:05.573154  786188 round_trippers.go:580]     Audit-Id: e4fc431e-fada-45a8-8a7e-28cdfe70ed72
	I0307 18:17:05.573163  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:05.573173  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:05.573181  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:05.573189  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:05.573195  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:05 GMT
	I0307 18:17:05.573303  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:06.074405  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:06.074431  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.074443  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.074451  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.076488  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:06.076513  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.076524  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.076534  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.076543  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.076552  786188 round_trippers.go:580]     Audit-Id: 41d104a9-c653-4be1-bf45-49fb0bf4596b
	I0307 18:17:06.076562  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.076575  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.076821  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"451","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0307 18:17:06.077315  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:06.077333  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.077345  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.077356  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.079034  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:06.079053  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.079062  786188 round_trippers.go:580]     Audit-Id: 816b0a92-7699-4b39-be91-cc4cec73a097
	I0307 18:17:06.079070  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.079078  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.079087  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.079096  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.079110  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.079184  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:06.573821  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:06.573842  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.573850  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.573856  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.575845  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:06.575870  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.575881  786188 round_trippers.go:580]     Audit-Id: 5db198d7-9c17-4e61-8505-50063c17e0bd
	I0307 18:17:06.575890  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.575897  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.575904  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.575917  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.575931  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.576038  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"472","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0307 18:17:06.576480  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:06.576493  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.576503  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.576511  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.578044  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:06.578070  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.578081  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.578090  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.578102  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.578116  786188 round_trippers.go:580]     Audit-Id: a6bad01d-adb7-4164-9afd-5324a0b8ee59
	I0307 18:17:06.578129  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.578142  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.578238  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:07.074114  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:07.074136  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.074144  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.074150  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.076312  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:07.076336  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.076348  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.076356  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.076365  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.076377  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.076395  786188 round_trippers.go:580]     Audit-Id: 019ed11c-1f24-4e21-99b5-211218cc820e
	I0307 18:17:07.076407  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.076534  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"475","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0307 18:17:07.076988  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:07.077001  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.077008  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.077017  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.078515  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:07.078536  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.078547  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.078556  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.078565  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.078578  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.078590  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.078598  786188 round_trippers.go:580]     Audit-Id: e9704b82-a5c0-42b8-9ce3-08ed6df69e9d
	I0307 18:17:07.078666  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:07.078950  786188 pod_ready.go:92] pod "kube-proxy-tbx65" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:07.078968  786188 pod_ready.go:81] duration metric: took 1.906438577s waiting for pod "kube-proxy-tbx65" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:07.078979  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:07.079088  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-242095
	I0307 18:17:07.079102  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.079109  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.079118  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.080680  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:07.080698  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.080708  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.080717  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.080726  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.080735  786188 round_trippers.go:580]     Audit-Id: 0e0b38f8-11a8-4ac1-ace6-6a36e85be0a3
	I0307 18:17:07.080748  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.080758  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.080847  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-242095","namespace":"kube-system","uid":"bd31dd93-d9b4-4f7a-9d31-d15d68702789","resourceVersion":"282","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.mirror":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.seen":"2023-03-07T18:16:19.703898726Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0307 18:17:07.170394  786188 request.go:622] Waited for 89.236345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:07.170448  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:07.170454  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.170463  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.170469  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.172241  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:07.172264  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.172274  786188 round_trippers.go:580]     Audit-Id: 2416e965-e2a1-4f39-a1b8-08d51b3036c9
	I0307 18:17:07.172282  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.172290  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.172300  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.172316  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.172324  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.172432  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:07.172845  786188 pod_ready.go:92] pod "kube-scheduler-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:07.172860  786188 pod_ready.go:81] duration metric: took 93.86566ms waiting for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:07.172873  786188 pod_ready.go:38] duration metric: took 2.40066872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:17:07.172901  786188 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 18:17:07.172955  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:17:07.183664  786188 system_svc.go:56] duration metric: took 10.753644ms WaitForService to wait for kubelet.
	I0307 18:17:07.183694  786188 kubeadm.go:578] duration metric: took 2.429179914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 18:17:07.183721  786188 node_conditions.go:102] verifying NodePressure condition ...
	I0307 18:17:07.370009  786188 request.go:622] Waited for 186.206724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0307 18:17:07.370061  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0307 18:17:07.370067  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.370077  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.370093  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.372224  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:07.372245  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.372252  786188 round_trippers.go:580]     Audit-Id: 8fe4bee5-321b-4155-8194-97f7514122c6
	I0307 18:17:07.372258  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.372267  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.372280  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.372294  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.372303  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.372511  786188 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"477"},"items":[{"metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0307 18:17:07.373014  786188 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0307 18:17:07.373029  786188 node_conditions.go:123] node cpu capacity is 8
	I0307 18:17:07.373039  786188 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0307 18:17:07.373048  786188 node_conditions.go:123] node cpu capacity is 8
	I0307 18:17:07.373056  786188 node_conditions.go:105] duration metric: took 189.325303ms to run NodePressure ...
	I0307 18:17:07.373071  786188 start.go:228] waiting for startup goroutines ...
	I0307 18:17:07.373097  786188 start.go:242] writing updated cluster config ...
	I0307 18:17:07.373373  786188 ssh_runner.go:195] Run: rm -f paused
	I0307 18:17:07.438086  786188 start.go:555] kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)
	I0307 18:17:07.441217  786188 out.go:177] * Done! kubectl is now configured to use "multinode-242095" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-03-07 18:16:02 UTC, end at Tue 2023-03-07 18:17:13 UTC. --
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.734632497Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735312424Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735331053Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735355546Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735365363Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735395423Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735418868Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735470681Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735504291Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735551786Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735565628Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735802269Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735837685Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.736250182Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.747143391Z" level=info msg="Loading containers: start."
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.824704209Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.857862864Z" level=info msg="Loading containers: done."
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.866502667Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.866547440Z" level=info msg="Daemon has completed initialization"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.879122453Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 07 18:16:05 multinode-242095 systemd[1]: Started Docker Application Container Engine.
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.886202533Z" level=info msg="API listen on [::]:2376"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.890155282Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 07 18:16:48 multinode-242095 dockerd[942]: time="2023-03-07T18:16:48.613799363Z" level=info msg="ignoring event" container=0cd1efd5f84995fe9a1cd5f12f40cab5d19af6fc5bcc048cf5593a0364d25282 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 18:16:48 multinode-242095 dockerd[942]: time="2023-03-07T18:16:48.681515987Z" level=info msg="ignoring event" container=a3f26daf9a048370abc6adb78a364048bff982cd7ec4bc2104a111cebea0a0ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	2d38b72e17f8d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 seconds ago        Running             busybox                   0                   3b793adae6b73
	ca75755eaea14       5185b96f0becf                                                                                         25 seconds ago       Running             coredns                   1                   9ce64715198e5
	282af577ac38b       kindest/kindnetd@sha256:7fc2671641a1a7e7b9b8341964bd7cfe9018f497dc41d58803f88b0cc4030e07              38 seconds ago       Running             kindnet-cni               0                   295ceb8066cdb
	0cd1efd5f8499       5185b96f0becf                                                                                         38 seconds ago       Exited              coredns                   0                   a3f26daf9a048
	639eded705175       6e38f40d628db                                                                                         39 seconds ago       Running             storage-provisioner       0                   26360294bb1ca
	bd0a44cc6e392       6f64e7135a6ec                                                                                         40 seconds ago       Running             kube-proxy                0                   0819928f73228
	02c3cc1dc6e48       db8f409d9a5d7                                                                                         About a minute ago   Running             kube-scheduler            0                   66ddf08048c9a
	0ecca898654fc       240e201d5b0d8                                                                                         About a minute ago   Running             kube-controller-manager   0                   d747de4f228aa
	3a83d434102f0       63d3239c3c159                                                                                         About a minute ago   Running             kube-apiserver            0                   acdde955f0646
	acdfade9da182       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   90379e88a49cf
	
	* 
	* ==> coredns [0cd1efd5f849] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:45668 - 59500 "HINFO IN 5563505768494837254.4717279030372536171. udp 57 false 512" - - 0 5.000140106s
	[ERROR] plugin/errors: 2 5563505768494837254.4717279030372536171. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50721 - 9079 "HINFO IN 5563505768494837254.4717279030372536171. udp 57 false 512" - - 0 5.000078385s
	[ERROR] plugin/errors: 2 5563505768494837254.4717279030372536171. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [ca75755eaea1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52841 - 53274 "HINFO IN 6905225731476761590.1859547329355607286. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00997676s
	[INFO] 10.244.0.3:58612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190161s
	[INFO] 10.244.0.3:37638 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.019184904s
	[INFO] 10.244.0.3:34309 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000616292s
	[INFO] 10.244.0.3:59850 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009421284s
	[INFO] 10.244.0.3:36390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128365s
	[INFO] 10.244.0.3:45601 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004941893s
	[INFO] 10.244.0.3:53754 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145115s
	[INFO] 10.244.0.3:44417 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154297s
	[INFO] 10.244.0.3:45670 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011981764s
	[INFO] 10.244.0.3:35547 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157822s
	[INFO] 10.244.0.3:51913 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137317s
	[INFO] 10.244.0.3:46444 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093313s
	[INFO] 10.244.0.3:56821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159548s
	[INFO] 10.244.0.3:49995 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010494s
	[INFO] 10.244.0.3:49513 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097357s
	[INFO] 10.244.0.3:49797 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098899s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-242095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-242095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1
	                    minikube.k8s.io/name=multinode-242095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_07T18_16_20_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Mar 2023 18:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-242095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Mar 2023 18:17:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-242095
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	System Info:
	  Machine ID:                 77d9a686e4a545abb7cfdc7dc7b2947f
	  System UUID:                8846a9c2-9acf-44c6-8c4e-298bf897e420
	  Boot ID:                    f01f161d-486d-4652-b75e-ddd4310bc409
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-rfr2n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-787d4945fb-fsll9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     41s
	  kube-system                 etcd-multinode-242095                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         53s
	  kube-system                 kindnet-4sm84                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-apiserver-multinode-242095             250m (3%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-multinode-242095    200m (2%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-rjsmj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-multinode-242095             100m (1%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 39s   kube-proxy       
	  Normal  Starting                 54s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s   kubelet          Node multinode-242095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s   kubelet          Node multinode-242095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s   kubelet          Node multinode-242095 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             54s   kubelet          Node multinode-242095 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  53s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                43s   kubelet          Node multinode-242095 status is now: NodeReady
	  Normal  RegisteredNode           41s   node-controller  Node multinode-242095 event: Registered Node multinode-242095 in Controller
	
	
	Name:               multinode-242095-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-242095-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Mar 2023 18:17:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-242095-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-242095-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	System Info:
	  Machine ID:                 77d9a686e4a545abb7cfdc7dc7b2947f
	  System UUID:                3fd908b3-2170-488e-8e30-9fff994820a6
	  Boot ID:                    f01f161d-486d-4652-b75e-ddd4310bc409
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-jvgsd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-j52z6               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10s
	  kube-system                 kube-proxy-tbx65            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7s                 kube-proxy       
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x2 over 10s)  kubelet          Node multinode-242095-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x2 over 10s)  kubelet          Node multinode-242095-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x2 over 10s)  kubelet          Node multinode-242095-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9s                 kubelet          Node multinode-242095-m02 status is now: NodeReady
	  Normal  RegisteredNode           6s                 node-controller  Node multinode-242095-m02 event: Registered Node multinode-242095-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.006304] FS-Cache: N-cookie c=0000001c [p=00000012 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=000000003c89735b{9p.inode} n=00000000aa45e9c8
	[  +0.008750] FS-Cache: N-key=[8] 'c8a20f0200000000'
	[  +4.456100] FS-Cache: Duplicate cookie detected
	[  +0.004739] FS-Cache: O-cookie c=00000015 [p=00000012 fl=226 nc=0 na=1]
	[  +0.006752] FS-Cache: O-cookie d=000000003c89735b{9p.inode} n=00000000a40be2fd
	[  +0.007366] FS-Cache: O-key=[8] 'c7a20f0200000000'
	[  +0.004962] FS-Cache: N-cookie c=0000001e [p=00000012 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=000000003c89735b{9p.inode} n=0000000081b67dfd
	[  +0.008746] FS-Cache: N-key=[8] 'c7a20f0200000000'
	[  +0.610772] FS-Cache: Duplicate cookie detected
	[  +0.004706] FS-Cache: O-cookie c=00000018 [p=00000012 fl=226 nc=0 na=1]
	[  +0.006763] FS-Cache: O-cookie d=000000003c89735b{9p.inode} n=0000000085ae5241
	[  +0.007353] FS-Cache: O-key=[8] 'cca20f0200000000'
	[  +0.004916] FS-Cache: N-cookie c=0000001f [p=00000012 fl=2 nc=0 na=1]
	[  +0.006600] FS-Cache: N-cookie d=000000003c89735b{9p.inode} n=00000000cfef0339
	[  +0.008725] FS-Cache: N-key=[8] 'cca20f0200000000'
	[ +11.432134] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 54 93 73 69 4f 08 06
	[Mar 7 18:11] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 57 14 d9 a5 58 08 06
	[  +0.191019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a be 16 89 59 4e 08 06
	[Mar 7 18:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 85 8e 3d d0 47 08 06
	
	* 
	* ==> etcd [acdfade9da18] <==
	* {"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-03-07T18:16:14.119Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-03-07T18:16:15.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-03-07T18:16:15.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.112Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:16:15.112Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-242095 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-07T18:16:15.112Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.114Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-07T18:16:15.114Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-03-07T18:16:55.122Z","caller":"traceutil/trace.go:171","msg":"trace[1470268138] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"149.242016ms","start":"2023-03-07T18:16:54.973Z","end":"2023-03-07T18:16:55.122Z","steps":["trace[1470268138] 'process raft request'  (duration: 149.099672ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:17:13 up  1:59,  0 users,  load average: 2.60, 2.37, 2.27
	Linux multinode-242095 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [282af577ac38] <==
	* I0307 18:16:35.893479       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0307 18:16:35.893519       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0307 18:16:35.893646       1 main.go:116] setting mtu 1500 for CNI 
	I0307 18:16:35.893668       1 main.go:146] kindnetd IP family: "ipv4"
	I0307 18:16:35.893680       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0307 18:16:36.195264       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:16:36.195290       1 main.go:227] handling current node
	I0307 18:16:46.306691       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:16:46.306722       1 main.go:227] handling current node
	I0307 18:16:56.319046       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:16:56.319077       1 main.go:227] handling current node
	I0307 18:17:06.330915       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:17:06.330940       1 main.go:227] handling current node
	I0307 18:17:06.330950       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0307 18:17:06.330955       1 main.go:250] Node multinode-242095-m02 has CIDR [10.244.1.0/24] 
	I0307 18:17:06.331131       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [3a83d434102f] <==
	* I0307 18:16:16.804470       1 controller.go:615] quota admission added evaluator for: namespaces
	E0307 18:16:16.810325       1 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: namespaces "kube-system" not found
	I0307 18:16:16.893406       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0307 18:16:16.893417       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0307 18:16:16.893765       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0307 18:16:16.893803       1 shared_informer.go:280] Caches are synced for configmaps
	I0307 18:16:16.894107       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0307 18:16:16.894136       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0307 18:16:17.012372       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 18:16:17.486290       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0307 18:16:17.697596       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0307 18:16:17.701154       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0307 18:16:17.701176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0307 18:16:18.060179       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 18:16:18.090497       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0307 18:16:18.150330       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0307 18:16:18.155383       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0307 18:16:18.156323       1 controller.go:615] quota admission added evaluator for: endpoints
	I0307 18:16:18.159820       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 18:16:18.716049       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0307 18:16:19.601328       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0307 18:16:19.609959       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0307 18:16:19.617679       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0307 18:16:32.704779       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0307 18:16:32.886267       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [0ecca898654f] <==
	* I0307 18:16:32.897860       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rjsmj"
	I0307 18:16:32.914948       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0307 18:16:32.915841       1 shared_informer.go:280] Caches are synced for taint
	I0307 18:16:32.915928       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0307 18:16:32.915955       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	I0307 18:16:32.916011       1 taint_manager.go:211] "Sending events to api server"
	W0307 18:16:32.916029       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-242095. Assuming now as a timestamp.
	I0307 18:16:32.916078       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0307 18:16:32.916346       1 event.go:294] "Event occurred" object="multinode-242095" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-242095 event: Registered Node multinode-242095 in Controller"
	I0307 18:16:32.919053       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0307 18:16:33.036088       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0307 18:16:33.046915       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-nghc4"
	I0307 18:16:33.301733       1 shared_informer.go:280] Caches are synced for garbage collector
	I0307 18:16:33.391572       1 shared_informer.go:280] Caches are synced for garbage collector
	I0307 18:16:33.391604       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W0307 18:17:03.875961       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-242095-m02" does not exist
	I0307 18:17:03.882618       1 range_allocator.go:372] Set node multinode-242095-m02 PodCIDR to [10.244.1.0/24]
	I0307 18:17:03.885476       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tbx65"
	I0307 18:17:03.885499       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j52z6"
	W0307 18:17:04.491974       1 topologycache.go:232] Can't get CPU or zone information for multinode-242095-m02 node
	W0307 18:17:07.920885       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-242095-m02. Assuming now as a timestamp.
	I0307 18:17:07.920926       1 event.go:294] "Event occurred" object="multinode-242095-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-242095-m02 event: Registered Node multinode-242095-m02 in Controller"
	I0307 18:17:08.478780       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0307 18:17:08.486660       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-jvgsd"
	I0307 18:17:08.490112       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-rfr2n"
	
	* 
	* ==> kube-proxy [bd0a44cc6e39] <==
	* I0307 18:16:34.102314       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0307 18:16:34.102399       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0307 18:16:34.102423       1 server_others.go:535] "Using iptables proxy"
	I0307 18:16:34.122840       1 server_others.go:176] "Using iptables Proxier"
	I0307 18:16:34.122873       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0307 18:16:34.122881       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0307 18:16:34.122895       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0307 18:16:34.122922       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 18:16:34.123223       1 server.go:655] "Version info" version="v1.26.2"
	I0307 18:16:34.123235       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:16:34.123735       1 config.go:317] "Starting service config controller"
	I0307 18:16:34.123758       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0307 18:16:34.123906       1 config.go:226] "Starting endpoint slice config controller"
	I0307 18:16:34.123935       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0307 18:16:34.124174       1 config.go:444] "Starting node config controller"
	I0307 18:16:34.124191       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0307 18:16:34.223848       1 shared_informer.go:280] Caches are synced for service config
	I0307 18:16:34.224890       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0307 18:16:34.224912       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [02c3cc1dc6e4] <==
	* W0307 18:16:16.808743       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 18:16:16.809716       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 18:16:16.808828       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:16.809814       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:16.808838       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:16:16.809912       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 18:16:16.808889       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:16:16.809986       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 18:16:16.808958       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:16.810078       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:16.809019       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 18:16:16.810153       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0307 18:16:16.809385       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 18:16:16.810219       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 18:16:17.709841       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 18:16:17.709869       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 18:16:17.783000       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 18:16:17.783040       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 18:16:17.827759       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:17.827853       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:17.859049       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:16:17.859098       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 18:16:17.942212       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 18:16:17.942272       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0307 18:16:18.304019       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-03-07 18:16:02 UTC, end at Tue 2023-03-07 18:17:13 UTC. --
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991671    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-kube-proxy\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991723    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-lib-modules\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991764    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7s8\" (UniqueName: \"kubernetes.io/projected/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-kube-api-access-fj7s8\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991793    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c406577e-74d2-4d81-b8a4-c827a78e2d61-cni-cfg\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991820    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c406577e-74d2-4d81-b8a4-c827a78e2d61-xtables-lock\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991867    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-xtables-lock\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991915    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c406577e-74d2-4d81-b8a4-c827a78e2d61-lib-modules\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991948    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xq9v\" (UniqueName: \"kubernetes.io/projected/c406577e-74d2-4d81-b8a4-c827a78e2d61-kube-api-access-5xq9v\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:33 multinode-242095 kubelet[2285]: I0307 18:16:33.941332    2285 topology_manager.go:210] "Topology Admit Handler"
	Mar 07 18:16:33 multinode-242095 kubelet[2285]: I0307 18:16:33.998223    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea1890f3-3928-474e-8b2d-10da6a0e9f14-tmp\") pod \"storage-provisioner\" (UID: \"ea1890f3-3928-474e-8b2d-10da6a0e9f14\") " pod="kube-system/storage-provisioner"
	Mar 07 18:16:33 multinode-242095 kubelet[2285]: I0307 18:16:33.998278    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9kgp\" (UniqueName: \"kubernetes.io/projected/ea1890f3-3928-474e-8b2d-10da6a0e9f14-kube-api-access-j9kgp\") pod \"storage-provisioner\" (UID: \"ea1890f3-3928-474e-8b2d-10da6a0e9f14\") " pod="kube-system/storage-provisioner"
	Mar 07 18:16:34 multinode-242095 kubelet[2285]: I0307 18:16:34.423519    2285 topology_manager.go:210] "Topology Admit Handler"
	Mar 07 18:16:34 multinode-242095 kubelet[2285]: I0307 18:16:34.503139    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pllm\" (UniqueName: \"kubernetes.io/projected/17db7207-f2ce-4566-85fc-dc7e0eb65d09-kube-api-access-7pllm\") pod \"coredns-787d4945fb-fsll9\" (UID: \"17db7207-f2ce-4566-85fc-dc7e0eb65d09\") " pod="kube-system/coredns-787d4945fb-fsll9"
	Mar 07 18:16:34 multinode-242095 kubelet[2285]: I0307 18:16:34.503192    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17db7207-f2ce-4566-85fc-dc7e0eb65d09-config-volume\") pod \"coredns-787d4945fb-fsll9\" (UID: \"17db7207-f2ce-4566-85fc-dc7e0eb65d09\") " pod="kube-system/coredns-787d4945fb-fsll9"
	Mar 07 18:16:35 multinode-242095 kubelet[2285]: I0307 18:16:35.298565    2285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f26daf9a048370abc6adb78a364048bff982cd7ec4bc2104a111cebea0a0ef"
	Mar 07 18:16:35 multinode-242095 kubelet[2285]: I0307 18:16:35.315728    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rjsmj" podStartSLOduration=3.315683855 pod.CreationTimestamp="2023-03-07 18:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:35.108304994 +0000 UTC m=+15.526106179" watchObservedRunningTime="2023-03-07 18:16:35.315683855 +0000 UTC m=+15.733485043"
	Mar 07 18:16:35 multinode-242095 kubelet[2285]: I0307 18:16:35.315911    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.315878703 pod.CreationTimestamp="2023-03-07 18:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:35.315381924 +0000 UTC m=+15.733183131" watchObservedRunningTime="2023-03-07 18:16:35.315878703 +0000 UTC m=+15.733679902"
	Mar 07 18:16:36 multinode-242095 kubelet[2285]: I0307 18:16:36.339765    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fsll9" podStartSLOduration=4.339720505 pod.CreationTimestamp="2023-03-07 18:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:36.339542114 +0000 UTC m=+16.757343302" watchObservedRunningTime="2023-03-07 18:16:36.339720505 +0000 UTC m=+16.757521693"
	Mar 07 18:16:36 multinode-242095 kubelet[2285]: I0307 18:16:36.340117    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4sm84" podStartSLOduration=-9.2233720325147e+09 pod.CreationTimestamp="2023-03-07 18:16:32 +0000 UTC" firstStartedPulling="2023-03-07 18:16:33.82479639 +0000 UTC m=+14.242597570" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:36.325700598 +0000 UTC m=+16.743501786" watchObservedRunningTime="2023-03-07 18:16:36.340075894 +0000 UTC m=+16.757877126"
	Mar 07 18:16:40 multinode-242095 kubelet[2285]: I0307 18:16:40.402347    2285 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 07 18:16:40 multinode-242095 kubelet[2285]: I0307 18:16:40.402972    2285 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 07 18:16:49 multinode-242095 kubelet[2285]: I0307 18:16:49.407257    2285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f26daf9a048370abc6adb78a364048bff982cd7ec4bc2104a111cebea0a0ef"
	Mar 07 18:17:08 multinode-242095 kubelet[2285]: I0307 18:17:08.497076    2285 topology_manager.go:210] "Topology Admit Handler"
	Mar 07 18:17:08 multinode-242095 kubelet[2285]: I0307 18:17:08.607732    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8n6h\" (UniqueName: \"kubernetes.io/projected/8887f3ec-66e6-4c54-9bd5-b93fe0e31681-kube-api-access-n8n6h\") pod \"busybox-6b86dd6d48-rfr2n\" (UID: \"8887f3ec-66e6-4c54-9bd5-b93fe0e31681\") " pod="default/busybox-6b86dd6d48-rfr2n"
	Mar 07 18:17:10 multinode-242095 kubelet[2285]: I0307 18:17:10.555078    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-rfr2n" podStartSLOduration=-9.223372034299732e+09 pod.CreationTimestamp="2023-03-07 18:17:08 +0000 UTC" firstStartedPulling="2023-03-07 18:17:09.083031242 +0000 UTC m=+49.500832426" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:17:10.554808963 +0000 UTC m=+50.972610151" watchObservedRunningTime="2023-03-07 18:17:10.555043611 +0000 UTC m=+50.972844798"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-242095 -n multinode-242095
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-242095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (6.34s)

TestMultiNode/serial/PingHostFrom2Pods (2.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-242095 -- exec busybox-6b86dd6d48-jvgsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242095
helpers_test.go:235: (dbg) docker inspect multinode-242095:

-- stdout --
	[
	    {
	        "Id": "d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614",
	        "Created": "2023-03-07T18:16:01.32530949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 787176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-07T18:16:01.685272554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ecf2c9654f2209c81fb249115d75cf7afa5e279e652d4cd7020a24755fb1b573",
	        "ResolvConfPath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/hosts",
	        "LogPath": "/var/lib/docker/containers/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614-json.log",
	        "Name": "/multinode-242095",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-242095:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-242095",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146-init/diff:/var/lib/docker/overlay2/919a933f2f65520d4dce55a67e6fc895e1b57558817c17c68c3332371c6bf864/diff:/var/lib/docker/overlay2/d65d2f46f6aacad358deb1fbc32f4b3a6f2fd572153e557e20e2df4757968368/diff:/var/lib/docker/overlay2/1518c45a2e2dce1bbb8c9aa4cc363e93df3a98ef694726780450519e31bd238c/diff:/var/lib/docker/overlay2/af8137f485c43770b622b3c06682d147962e52a09024fe1127c4012bd2b16dd1/diff:/var/lib/docker/overlay2/0c39a9c32c3420d15952bdaed361fbf502b7b7ec06ae5006d34ac5aebdd52b2e/diff:/var/lib/docker/overlay2/4b4c7c8f39851d9c713bdf69c47cda85bf28a7abbccd1efbfbfd2094a59ecf74/diff:/var/lib/docker/overlay2/a226f271a7dce28a16bd03338f4305d4cc5942639ea048bddc52d90676d5dadb/diff:/var/lib/docker/overlay2/798bf3f5849c5b37e64db134f6d6f0a76c77c3bc41de7f27b100e37cec888b0a/diff:/var/lib/docker/overlay2/8a955ad3c07447aaef0bf72a4fdd9c80dee7dd7b664319328958e91aa47723a5/diff:/var/lib/docker/overlay2/287fdc
6bbb7638b228c8d48f3a27342f66ab418bb1a026e7f4042650bac659c3/diff:/var/lib/docker/overlay2/56da69234005db78c51a0283d4e9cf00d88eb2f09ad16065d3d63438cab72528/diff:/var/lib/docker/overlay2/8adf80e19b4d86d17c4aab98e63b03e64aadfc167208fbd7a138f9351850ba3e/diff:/var/lib/docker/overlay2/b5fb9d46cd71c46fa6b95af53d84645498fefb689b6b7a8271ac64ef8b14f873/diff:/var/lib/docker/overlay2/7b83e52a5eeed93b87fdfb42fedd5e20a65c16b364a3173a412139fe87666842/diff:/var/lib/docker/overlay2/038dd5daad1ba03ab8124e662dbde6e352fd60b0920f19aa4b5f23f2c5d42e86/diff:/var/lib/docker/overlay2/29f9b656ab67e0347a7932337b58ccdf3f0846944fb64bb3e8b92d5150ccf75a/diff:/var/lib/docker/overlay2/e70566b6845919f6e856944a62e104bb99474342dcf9c33d0aa70679016659b4/diff:/var/lib/docker/overlay2/100bebad8422b6c9015de0846e887bd4347808552610c6b8c149e2030e4c0a1d/diff:/var/lib/docker/overlay2/b06220b91f876ad14e77f8e058436c8bd48a61be6c4ab1640a1abcbae75f9168/diff:/var/lib/docker/overlay2/630d79d8012f6bc27cf10474af460d069e1f90e142404e900cd06c51a4f4b3d2/diff:/var/lib/d
ocker/overlay2/54b4a84e08cfd660941bb5bd24b4b08c366980a2f60d6b5e7387d3bb4b7a20ae/diff:/var/lib/docker/overlay2/fe377ebe6634536001708957e3f740e03a688f0ed64d61c3a8a800d6b36cb0d4/diff:/var/lib/docker/overlay2/8b95cb1cb3d13f3c9c52bce66d5e61d04de2702ac15553fb72587d9589ec4d57/diff:/var/lib/docker/overlay2/6994ab173d1db5859fcc37a2387f6fd0ca92af2299f5f9b179c3a6de26e89965/diff:/var/lib/docker/overlay2/337e602a34c14c5c38c5e10f0742ae43130b8b6cc3cc07a10763f130a5809b5e/diff:/var/lib/docker/overlay2/65895bc80ed0fde627f0fe5b2eb2ff6c54ca83950ff46dbdb471f0629138b7da/diff:/var/lib/docker/overlay2/6a465ef5ff9312a8b2abf1d0eb61d0fc524542eb7c2d3836e42ac6cf9842233b/diff:/var/lib/docker/overlay2/c51e98fe15f6aff45a9234653cbc07f7d8e592c01233419c2aeb78b30f89b20f/diff:/var/lib/docker/overlay2/d6e942ab4944c8ad54cf1b8d146bfe8b2ff2a324e047c5c41f2451a2abe244e8/diff:/var/lib/docker/overlay2/07c1a0226e0ae9bc5a8a0dce15c688680a6802044b66d6b0087a3c904611d32a/diff:/var/lib/docker/overlay2/3e1ff08623a31836a6a7b281fbd7a3263b0e5e208067c31c91b8219fee6
31657/diff:/var/lib/docker/overlay2/231c5ac90e2b3b243b1a99debfa6af60f7d054158434d870678ff2ed600ed2b0/diff:/var/lib/docker/overlay2/658d368b80a6e77c7f4230d0cb1ea8ac7029426c32eeabbfe8aa64c69d696068/diff:/var/lib/docker/overlay2/422b8e31c25887c4d52d9d069ad2d1dbf68925474f40f49ee1097f62df7ad9e5/diff:/var/lib/docker/overlay2/6ae593499dac8a42852bb4bd3d84df42e373dbc6b211eee190c3a8785413ccf4/diff:/var/lib/docker/overlay2/9e6c3b3c7f3cee8a4b0334d3409c9e39a202d21b9f673a0c0f9a8bab27f4ce61/diff:/var/lib/docker/overlay2/2a49fdc47125f029948d1de86932b652faf6358a5f4b0cf15ec05a421f7c3678/diff:/var/lib/docker/overlay2/5a19340fed828a972bff409c5893b3125502e8ceb550fe2d8991015605076aff/diff:/var/lib/docker/overlay2/b4a05bca1441de84af28208a839c75f4af1f24c217fdaaf06b76e82f01fc4d25/diff:/var/lib/docker/overlay2/3ccc3fc608334d7d73cd5b962f04073071dd5372a6603c7804ba2593976c76e4/diff:/var/lib/docker/overlay2/973d6264ada7511f823a30269486644fc007fd9ec97372d6379a99aa5f2ad215/diff:/var/lib/docker/overlay2/7c8ac5df9ef9fdd654a01e1b48006689cf1fe0
24d2b06df6e41afb0d0d1ec5d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5ad86954620e66779f8210beec766f249d89cf2ac672812b98b9876df02e146/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-242095",
	                "Source": "/var/lib/docker/volumes/multinode-242095/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-242095",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-242095",
	                "name.minikube.sigs.k8s.io": "multinode-242095",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8ab9b460df607dd60f8f2d7fc2844b5153a34764e944187ba30eb87168d23cf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c8ab9b460df6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-242095": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1953c0fdb57",
	                        "multinode-242095"
	                    ],
	                    "NetworkID": "d64b017e2b06dcf471040ca17ec801bfd97cfecf0860c7ece05de26ea5806633",
	                    "EndpointID": "9931b741d35b07aba5ba0ceeba050940fa5e3ac65c8327d49ca9f042163c37a9",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-242095 -n multinode-242095
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 logs -n 25: (1.135244206s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p first-421668                                   | first-421668         | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| start   | -p mount-start-1-794899                           | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-1-794899 ssh -- ls                    | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-811521 ssh -- ls                    | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-794899                           | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-811521 ssh -- ls                    | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| start   | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| ssh     | mount-start-2-811521 ssh -- ls                    | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-811521                           | mount-start-2-811521 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| delete  | -p mount-start-1-794899                           | mount-start-1-794899 | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:15 UTC |
	| start   | -p multinode-242095                               | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:15 UTC | 07 Mar 23 18:17 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- apply -f                   | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- rollout                    | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- get pods -o                | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- get pods -o                | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC |                     |
	|         | busybox-6b86dd6d48-jvgsd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | busybox-6b86dd6d48-rfr2n --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC |                     |
	|         | busybox-6b86dd6d48-jvgsd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | busybox-6b86dd6d48-rfr2n --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC |                     |
	|         | busybox-6b86dd6d48-jvgsd -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | busybox-6b86dd6d48-rfr2n -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- get pods -o                | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-242095 -- exec                       | multinode-242095     | jenkins | v1.29.0 | 07 Mar 23 18:17 UTC | 07 Mar 23 18:17 UTC |
	|         | busybox-6b86dd6d48-jvgsd                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 18:15:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:15:54.931777  786188 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:15:54.932212  786188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:15:54.932232  786188 out.go:309] Setting ErrFile to fd 2...
	I0307 18:15:54.932240  786188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:15:54.932496  786188 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	I0307 18:15:54.933613  786188 out.go:303] Setting JSON to false
	I0307 18:15:54.934936  786188 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7106,"bootTime":1678205849,"procs":744,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:15:54.935000  786188 start.go:135] virtualization: kvm guest
	I0307 18:15:54.937189  786188 out.go:177] * [multinode-242095] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:15:54.939220  786188 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:15:54.939058  786188 notify.go:220] Checking for updates...
	I0307 18:15:54.940809  786188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:15:54.942432  786188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:15:54.944014  786188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	I0307 18:15:54.945431  786188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:15:54.946842  786188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:15:54.948558  786188 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:15:55.017643  786188 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0307 18:15:55.017759  786188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:15:55.134130  786188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-07 18:15:55.12526107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:15:55.134265  786188 docker.go:294] overlay module found
	I0307 18:15:55.136437  786188 out.go:177] * Using the docker driver based on user configuration
	I0307 18:15:55.137805  786188 start.go:296] selected driver: docker
	I0307 18:15:55.137816  786188 start.go:857] validating driver "docker" against <nil>
	I0307 18:15:55.137831  786188 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:15:55.138561  786188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:15:55.253976  786188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-07 18:15:55.246025628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:15:55.254123  786188 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0307 18:15:55.254384  786188 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:15:55.256432  786188 out.go:177] * Using Docker driver with root privileges
	I0307 18:15:55.258010  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:15:55.258025  786188 cni.go:136] 0 nodes found, recommending kindnet
	I0307 18:15:55.258035  786188 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 18:15:55.258050  786188 start_flags.go:319] config:
	{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:15:55.259771  786188 out.go:177] * Starting control plane node multinode-242095 in cluster multinode-242095
	I0307 18:15:55.261202  786188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0307 18:15:55.262657  786188 out.go:177] * Pulling base image ...
	I0307 18:15:55.264006  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:15:55.264030  786188 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon
	I0307 18:15:55.264045  786188 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0307 18:15:55.264057  786188 cache.go:57] Caching tarball of preloaded images
	I0307 18:15:55.264178  786188 preload.go:174] Found /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 18:15:55.264191  786188 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 18:15:55.264547  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:15:55.264573  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json: {Name:mkca2eae4602c84e1e5460196b84850da3483521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:15:55.326841  786188 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon, skipping pull
	I0307 18:15:55.326866  786188 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 exists in daemon, skipping load
	I0307 18:15:55.326885  786188 cache.go:193] Successfully downloaded all kic artifacts
	I0307 18:15:55.326942  786188 start.go:364] acquiring machines lock for multinode-242095: {Name:mk8dbb7646a5affb9e9bdbf371579a97af9f6e48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:15:55.327072  786188 start.go:368] acquired machines lock for "multinode-242095" in 100.418µs
	I0307 18:15:55.327111  786188 start.go:93] Provisioning new machine with config: &{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 18:15:55.327206  786188 start.go:125] createHost starting for "" (driver="docker")
	I0307 18:15:55.329448  786188 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 18:15:55.329736  786188 start.go:159] libmachine.API.Create for "multinode-242095" (driver="docker")
	I0307 18:15:55.329773  786188 client.go:168] LocalClient.Create starting
	I0307 18:15:55.329849  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem
	I0307 18:15:55.329890  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:15:55.329910  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:15:55.329993  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem
	I0307 18:15:55.330021  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:15:55.330036  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:15:55.330410  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 18:15:55.393404  786188 cli_runner.go:211] docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 18:15:55.393472  786188 network_create.go:281] running [docker network inspect multinode-242095] to gather additional debugging logs...
	I0307 18:15:55.393491  786188 cli_runner.go:164] Run: docker network inspect multinode-242095
	W0307 18:15:55.455828  786188 cli_runner.go:211] docker network inspect multinode-242095 returned with exit code 1
	I0307 18:15:55.455875  786188 network_create.go:284] error running [docker network inspect multinode-242095]: docker network inspect multinode-242095: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-242095 not found
	I0307 18:15:55.455887  786188 network_create.go:286] output of [docker network inspect multinode-242095]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-242095 not found
	
	** /stderr **
	I0307 18:15:55.455936  786188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:15:55.516686  786188 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ff68d98ad1f6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:f1:e9:e0} reservation:<nil>}
	I0307 18:15:55.517213  786188 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014d74b0}
	I0307 18:15:55.517242  786188 network_create.go:123] attempt to create docker network multinode-242095 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0307 18:15:55.517291  786188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242095 multinode-242095
	I0307 18:15:55.610545  786188 network_create.go:107] docker network multinode-242095 192.168.58.0/24 created
	I0307 18:15:55.610577  786188 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-242095" container
	I0307 18:15:55.610632  786188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 18:15:55.673127  786188 cli_runner.go:164] Run: docker volume create multinode-242095 --label name.minikube.sigs.k8s.io=multinode-242095 --label created_by.minikube.sigs.k8s.io=true
	I0307 18:15:55.737977  786188 oci.go:103] Successfully created a docker volume multinode-242095
	I0307 18:15:55.738085  786188 cli_runner.go:164] Run: docker run --rm --name multinode-242095-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095 --entrypoint /usr/bin/test -v multinode-242095:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -d /var/lib
	I0307 18:15:56.315501  786188 oci.go:107] Successfully prepared a docker volume multinode-242095
	I0307 18:15:56.315544  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:15:56.315567  786188 kic.go:190] Starting extracting preloaded images to volume ...
	I0307 18:15:56.315636  786188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 18:16:01.146072  786188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.830376075s)
	I0307 18:16:01.146121  786188 kic.go:199] duration metric: took 4.830547 seconds to extract preloaded images to volume
	W0307 18:16:01.146295  786188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 18:16:01.146448  786188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 18:16:01.262141  786188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-242095 --name multinode-242095 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-242095 --network multinode-242095 --ip 192.168.58.2 --volume multinode-242095:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9
	I0307 18:16:01.692913  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Running}}
	I0307 18:16:01.761029  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:01.830535  786188 cli_runner.go:164] Run: docker exec multinode-242095 stat /var/lib/dpkg/alternatives/iptables
	I0307 18:16:01.949015  786188 oci.go:144] the created container "multinode-242095" has a running status.
	I0307 18:16:01.949052  786188 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa...
	I0307 18:16:02.153840  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0307 18:16:02.153892  786188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 18:16:02.270926  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:02.341390  786188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 18:16:02.341419  786188 kic_runner.go:114] Args: [docker exec --privileged multinode-242095 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 18:16:02.460882  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:02.525919  786188 machine.go:88] provisioning docker machine ...
	I0307 18:16:02.525957  786188 ubuntu.go:169] provisioning hostname "multinode-242095"
	I0307 18:16:02.526029  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:02.592509  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:02.592972  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:02.592990  786188 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-242095 && echo "multinode-242095" | sudo tee /etc/hostname
	I0307 18:16:02.712150  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-242095
	
	I0307 18:16:02.712232  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:02.775415  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:02.775870  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:02.775893  786188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-242095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-242095/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-242095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:16:02.882900  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:16:02.882928  786188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15985-636026/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-636026/.minikube}
	I0307 18:16:02.882949  786188 ubuntu.go:177] setting up certificates
	I0307 18:16:02.882959  786188 provision.go:83] configureAuth start
	I0307 18:16:02.883018  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095
	I0307 18:16:02.944427  786188 provision.go:138] copyHostCerts
	I0307 18:16:02.944470  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:02.944501  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem, removing ...
	I0307 18:16:02.944511  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:02.944582  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem (1123 bytes)
	I0307 18:16:02.944658  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:02.944680  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem, removing ...
	I0307 18:16:02.944687  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:02.944713  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem (1679 bytes)
	I0307 18:16:02.944774  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:02.944792  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem, removing ...
	I0307 18:16:02.944801  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:02.944829  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem (1082 bytes)
	I0307 18:16:02.944886  786188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem org=jenkins.multinode-242095 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-242095]
	I0307 18:16:03.173353  786188 provision.go:172] copyRemoteCerts
	I0307 18:16:03.173413  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:16:03.173453  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.236744  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:03.318491  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 18:16:03.318553  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 18:16:03.335541  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 18:16:03.335587  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 18:16:03.351784  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 18:16:03.351827  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 18:16:03.367856  786188 provision.go:86] duration metric: configureAuth took 484.880035ms
	I0307 18:16:03.367876  786188 ubuntu.go:193] setting minikube options for container-runtime
	I0307 18:16:03.368041  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:03.368096  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.431225  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:03.431689  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:03.431711  786188 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 18:16:03.547069  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0307 18:16:03.547100  786188 ubuntu.go:71] root file system type: overlay
	I0307 18:16:03.547249  786188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 18:16:03.547322  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.610630  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:03.611064  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:03.611124  786188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 18:16:03.727539  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 18:16:03.727609  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:03.790236  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:03.790664  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I0307 18:16:03.790685  786188 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 18:16:04.418495  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-07 18:16:03.723139738 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0307 18:16:04.418534  786188 machine.go:91] provisioned docker machine in 1.892591812s
	I0307 18:16:04.418549  786188 client.go:171] LocalClient.Create took 9.088765764s
	I0307 18:16:04.418571  786188 start.go:167] duration metric: libmachine.API.Create for "multinode-242095" took 9.088835023s
	I0307 18:16:04.418584  786188 start.go:300] post-start starting for "multinode-242095" (driver="docker")
	I0307 18:16:04.418597  786188 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:16:04.418664  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:16:04.418714  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.482678  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:04.570818  786188 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:16:04.573563  786188 command_runner.go:130] > NAME="Ubuntu"
	I0307 18:16:04.573584  786188 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0307 18:16:04.573590  786188 command_runner.go:130] > ID=ubuntu
	I0307 18:16:04.573597  786188 command_runner.go:130] > ID_LIKE=debian
	I0307 18:16:04.573613  786188 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0307 18:16:04.573620  786188 command_runner.go:130] > VERSION_ID="20.04"
	I0307 18:16:04.573628  786188 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0307 18:16:04.573634  786188 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0307 18:16:04.573642  786188 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0307 18:16:04.573654  786188 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0307 18:16:04.573665  786188 command_runner.go:130] > VERSION_CODENAME=focal
	I0307 18:16:04.573672  786188 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0307 18:16:04.573738  786188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 18:16:04.573754  786188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 18:16:04.573762  786188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 18:16:04.573770  786188 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0307 18:16:04.573780  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/addons for local assets ...
	I0307 18:16:04.573831  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/files for local assets ...
	I0307 18:16:04.573897  786188 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> 6427432.pem in /etc/ssl/certs
	I0307 18:16:04.573906  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /etc/ssl/certs/6427432.pem
	I0307 18:16:04.573977  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 18:16:04.580100  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:16:04.596589  786188 start.go:303] post-start completed in 177.993098ms
	I0307 18:16:04.596958  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095
	I0307 18:16:04.660545  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:16:04.660791  786188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:16:04.660836  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.724855  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:04.807361  786188 command_runner.go:130] > 16%!
	(MISSING)I0307 18:16:04.807427  786188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 18:16:04.811084  786188 command_runner.go:130] > 245G
	I0307 18:16:04.811118  786188 start.go:128] duration metric: createHost completed in 9.48390237s
	I0307 18:16:04.811127  786188 start.go:83] releasing machines lock for "multinode-242095", held for 9.484040996s
	I0307 18:16:04.811178  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095
	I0307 18:16:04.874778  786188 ssh_runner.go:195] Run: cat /version.json
	I0307 18:16:04.874829  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.874907  786188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 18:16:04.874998  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:04.943415  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:04.943867  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:05.058478  786188 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 18:16:05.059801  786188 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1677262057-15923", "minikube_version": "v1.29.0", "commit": "d5f8b7c14d0e3cd88db476786b15ed1c8f7b9a62"}
	I0307 18:16:05.059924  786188 ssh_runner.go:195] Run: systemctl --version
	I0307 18:16:05.063547  786188 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0307 18:16:05.063574  786188 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0307 18:16:05.063638  786188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 18:16:05.067045  786188 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0307 18:16:05.067062  786188 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0307 18:16:05.067068  786188 command_runner.go:130] > Device: 36h/54d	Inode: 2131168     Links: 1
	I0307 18:16:05.067078  786188 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:16:05.067087  786188 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:16:05.067096  786188 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:16:05.067106  786188 command_runner.go:130] > Change: 2023-03-07 18:01:36.367924495 +0000
	I0307 18:16:05.067111  786188 command_runner.go:130] >  Birth: -
	I0307 18:16:05.067281  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 18:16:05.086271  786188 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 18:16:05.086323  786188 ssh_runner.go:195] Run: which cri-dockerd
	I0307 18:16:05.088875  786188 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 18:16:05.089038  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 18:16:05.095410  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 18:16:05.107501  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:16:05.121355  786188 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0307 18:16:05.121405  786188 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0307 18:16:05.121420  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:16:05.121453  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:16:05.121557  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:16:05.133192  786188 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:16:05.133212  786188 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:16:05.133277  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 18:16:05.140361  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:16:05.147468  786188 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:16:05.147522  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:16:05.154740  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:16:05.161816  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:16:05.168791  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:16:05.176118  786188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:16:05.182721  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 18:16:05.189835  786188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:16:05.195321  786188 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 18:16:05.196318  786188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:16:05.203660  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:16:05.274487  786188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:16:05.357224  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:16:05.357276  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:16:05.357330  786188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 18:16:05.366182  786188 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0307 18:16:05.366278  786188 command_runner.go:130] > [Unit]
	I0307 18:16:05.366297  786188 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 18:16:05.366309  786188 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 18:16:05.366316  786188 command_runner.go:130] > BindsTo=containerd.service
	I0307 18:16:05.366326  786188 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0307 18:16:05.366338  786188 command_runner.go:130] > Wants=network-online.target
	I0307 18:16:05.366349  786188 command_runner.go:130] > Requires=docker.socket
	I0307 18:16:05.366357  786188 command_runner.go:130] > StartLimitBurst=3
	I0307 18:16:05.366367  786188 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 18:16:05.366375  786188 command_runner.go:130] > [Service]
	I0307 18:16:05.366383  786188 command_runner.go:130] > Type=notify
	I0307 18:16:05.366393  786188 command_runner.go:130] > Restart=on-failure
	I0307 18:16:05.366409  786188 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 18:16:05.366427  786188 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 18:16:05.366441  786188 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 18:16:05.366454  786188 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 18:16:05.366467  786188 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 18:16:05.366481  786188 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 18:16:05.366497  786188 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 18:16:05.366516  786188 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 18:16:05.366531  786188 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 18:16:05.366540  786188 command_runner.go:130] > ExecStart=
	I0307 18:16:05.366566  786188 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0307 18:16:05.366579  786188 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 18:16:05.366592  786188 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 18:16:05.366623  786188 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 18:16:05.366634  786188 command_runner.go:130] > LimitNOFILE=infinity
	I0307 18:16:05.366641  786188 command_runner.go:130] > LimitNPROC=infinity
	I0307 18:16:05.366650  786188 command_runner.go:130] > LimitCORE=infinity
	I0307 18:16:05.366660  786188 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 18:16:05.366671  786188 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 18:16:05.366680  786188 command_runner.go:130] > TasksMax=infinity
	I0307 18:16:05.366687  786188 command_runner.go:130] > TimeoutStartSec=0
	I0307 18:16:05.366711  786188 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 18:16:05.366723  786188 command_runner.go:130] > Delegate=yes
	I0307 18:16:05.366734  786188 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 18:16:05.366744  786188 command_runner.go:130] > KillMode=process
	I0307 18:16:05.366759  786188 command_runner.go:130] > [Install]
	I0307 18:16:05.366769  786188 command_runner.go:130] > WantedBy=multi-user.target
	I0307 18:16:05.367066  786188 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0307 18:16:05.367125  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:16:05.377113  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:16:05.388695  786188 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:16:05.388722  786188 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:16:05.389568  786188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 18:16:05.495258  786188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 18:16:05.573651  786188 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 18:16:05.573691  786188 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 18:16:05.602578  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:16:05.676496  786188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 18:16:05.880899  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:16:05.890011  786188 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0307 18:16:05.957675  786188 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 18:16:06.029343  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:16:06.104607  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:16:06.173711  786188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 18:16:06.184480  786188 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 18:16:06.184553  786188 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 18:16:06.187420  786188 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 18:16:06.187459  786188 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 18:16:06.187469  786188 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0307 18:16:06.187480  786188 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0307 18:16:06.187495  786188 command_runner.go:130] > Access: 2023-03-07 18:16:06.179386721 +0000
	I0307 18:16:06.187504  786188 command_runner.go:130] > Modify: 2023-03-07 18:16:06.179386721 +0000
	I0307 18:16:06.187510  786188 command_runner.go:130] > Change: 2023-03-07 18:16:06.179386721 +0000
	I0307 18:16:06.187514  786188 command_runner.go:130] >  Birth: -
	I0307 18:16:06.187532  786188 start.go:553] Will wait 60s for crictl version
	I0307 18:16:06.187576  786188 ssh_runner.go:195] Run: which crictl
	I0307 18:16:06.190013  786188 command_runner.go:130] > /usr/bin/crictl
	I0307 18:16:06.190159  786188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:16:06.266348  786188 command_runner.go:130] > Version:  0.1.0
	I0307 18:16:06.266372  786188 command_runner.go:130] > RuntimeName:  docker
	I0307 18:16:06.266379  786188 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0307 18:16:06.266388  786188 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 18:16:06.268109  786188 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0307 18:16:06.268178  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:16:06.290135  786188 command_runner.go:130] > 23.0.1
	I0307 18:16:06.290205  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:16:06.310464  786188 command_runner.go:130] > 23.0.1
	I0307 18:16:06.315381  786188 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0307 18:16:06.315483  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:16:06.381405  786188 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0307 18:16:06.384852  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:16:06.393931  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:16:06.393983  786188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 18:16:06.410274  786188 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 18:16:06.410296  786188 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 18:16:06.410305  786188 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 18:16:06.410315  786188 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 18:16:06.410323  786188 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 18:16:06.410329  786188 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 18:16:06.410340  786188 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 18:16:06.410348  786188 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:16:06.411468  786188 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 18:16:06.411490  786188 docker.go:560] Images already preloaded, skipping extraction
	I0307 18:16:06.411547  786188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 18:16:06.427506  786188 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 18:16:06.427531  786188 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 18:16:06.427548  786188 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 18:16:06.427555  786188 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 18:16:06.427560  786188 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 18:16:06.427564  786188 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 18:16:06.427569  786188 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 18:16:06.427577  786188 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:16:06.428726  786188 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 18:16:06.428744  786188 cache_images.go:84] Images are preloaded, skipping loading
	I0307 18:16:06.428801  786188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 18:16:06.451260  786188 command_runner.go:130] > cgroupfs
	I0307 18:16:06.451332  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:16:06.451346  786188 cni.go:136] 1 nodes found, recommending kindnet
	I0307 18:16:06.451360  786188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 18:16:06.451385  786188 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-242095 NodeName:multinode-242095 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 18:16:06.451564  786188 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-242095"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 18:16:06.451664  786188 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-242095 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 18:16:06.451718  786188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 18:16:06.458500  786188 command_runner.go:130] > kubeadm
	I0307 18:16:06.458519  786188 command_runner.go:130] > kubectl
	I0307 18:16:06.458525  786188 command_runner.go:130] > kubelet
	I0307 18:16:06.458547  786188 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:16:06.458594  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 18:16:06.465190  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0307 18:16:06.477749  786188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:16:06.490028  786188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0307 18:16:06.502011  786188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0307 18:16:06.504682  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:16:06.513322  786188 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095 for IP: 192.168.58.2
	I0307 18:16:06.513347  786188 certs.go:186] acquiring lock for shared ca certs: {Name:mk6aa9dfc4b93dc10fe6d5a07411d8b3adb46804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.513489  786188 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key
	I0307 18:16:06.513530  786188 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key
	I0307 18:16:06.513587  786188 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key
	I0307 18:16:06.513600  786188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt with IP's: []
	I0307 18:16:06.751218  786188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt ...
	I0307 18:16:06.751252  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt: {Name:mk3556412664174b1430b247b49895322a37a5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.751419  786188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key ...
	I0307 18:16:06.751431  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key: {Name:mkde36e5a541677c98da0cfe15583bfe6e293f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.751526  786188 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041
	I0307 18:16:06.751540  786188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0307 18:16:06.906547  786188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041 ...
	I0307 18:16:06.906577  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041: {Name:mk64e669a3624bbba51cf370217c5818c5e82f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.906717  786188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041 ...
	I0307 18:16:06.906727  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041: {Name:mkb0d02d3cff1f19e6e6f14f079c5837a5b6505a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.906782  786188 certs.go:333] copying /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt
	I0307 18:16:06.906848  786188 certs.go:337] copying /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key
	I0307 18:16:06.906907  786188 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key
	I0307 18:16:06.906920  786188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt with IP's: []
	I0307 18:16:06.959740  786188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt ...
	I0307 18:16:06.959765  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt: {Name:mk3368b479dd550d3cdec9cba98713d2e9e8e080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.959881  786188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key ...
	I0307 18:16:06.959895  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key: {Name:mkb1a5e254ccb5b4b9145f97db39e4f420a21824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:06.959961  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 18:16:06.959978  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 18:16:06.959987  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 18:16:06.959999  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 18:16:06.960012  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 18:16:06.960027  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 18:16:06.960040  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 18:16:06.960049  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 18:16:06.960107  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem (1338 bytes)
	W0307 18:16:06.960140  786188 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743_empty.pem, impossibly tiny 0 bytes
	I0307 18:16:06.960151  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 18:16:06.960178  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem (1082 bytes)
	I0307 18:16:06.960201  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:16:06.960224  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem (1679 bytes)
	I0307 18:16:06.960259  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:16:06.960283  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:06.960296  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem -> /usr/share/ca-certificates/642743.pem
	I0307 18:16:06.960307  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /usr/share/ca-certificates/6427432.pem
	I0307 18:16:06.960864  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0307 18:16:06.979312  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 18:16:06.995980  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 18:16:07.012899  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 18:16:07.029138  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:16:07.045508  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 18:16:07.061491  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:16:07.077436  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:16:07.093037  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:16:07.109204  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem --> /usr/share/ca-certificates/642743.pem (1338 bytes)
	I0307 18:16:07.125050  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /usr/share/ca-certificates/6427432.pem (1708 bytes)
	I0307 18:16:07.141153  786188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 18:16:07.152851  786188 ssh_runner.go:195] Run: openssl version
	I0307 18:16:07.157179  786188 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0307 18:16:07.157249  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6427432.pem && ln -fs /usr/share/ca-certificates/6427432.pem /etc/ssl/certs/6427432.pem"
	I0307 18:16:07.164041  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.166774  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.166844  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.166893  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6427432.pem
	I0307 18:16:07.171158  786188 command_runner.go:130] > 3ec20f2e
	I0307 18:16:07.171350  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6427432.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 18:16:07.178077  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:16:07.184784  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.187383  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.187494  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.187537  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:16:07.191793  786188 command_runner.go:130] > b5213941
	I0307 18:16:07.191828  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 18:16:07.198179  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/642743.pem && ln -fs /usr/share/ca-certificates/642743.pem /etc/ssl/certs/642743.pem"
	I0307 18:16:07.204766  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.207382  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.207433  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.207483  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/642743.pem
	I0307 18:16:07.211548  786188 command_runner.go:130] > 51391683
	I0307 18:16:07.211691  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/642743.pem /etc/ssl/certs/51391683.0"
	I0307 18:16:07.218110  786188 kubeadm.go:401] StartCluster: {Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:16:07.218215  786188 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 18:16:07.234150  786188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 18:16:07.240431  786188 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0307 18:16:07.240449  786188 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0307 18:16:07.240454  786188 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0307 18:16:07.240493  786188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:16:07.247416  786188 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0307 18:16:07.247475  786188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:16:07.253606  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0307 18:16:07.253635  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0307 18:16:07.253644  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0307 18:16:07.253655  786188 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:16:07.253693  786188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:16:07.253730  786188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0307 18:16:07.297573  786188 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
	I0307 18:16:07.297605  786188 command_runner.go:130] > [init] Using Kubernetes version: v1.26.2
	I0307 18:16:07.297666  786188 kubeadm.go:322] [preflight] Running pre-flight checks
	I0307 18:16:07.297677  786188 command_runner.go:130] > [preflight] Running pre-flight checks
	I0307 18:16:07.331515  786188 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0307 18:16:07.331546  786188 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0307 18:16:07.331618  786188 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1030-gcp
	I0307 18:16:07.331628  786188 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0307 18:16:07.331663  786188 kubeadm.go:322] OS: Linux
	I0307 18:16:07.331669  786188 command_runner.go:130] > OS: Linux
	I0307 18:16:07.331708  786188 kubeadm.go:322] CGROUPS_CPU: enabled
	I0307 18:16:07.331714  786188 command_runner.go:130] > CGROUPS_CPU: enabled
	I0307 18:16:07.331752  786188 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0307 18:16:07.331762  786188 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0307 18:16:07.331834  786188 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0307 18:16:07.331841  786188 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0307 18:16:07.331878  786188 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0307 18:16:07.331884  786188 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0307 18:16:07.331927  786188 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0307 18:16:07.331933  786188 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0307 18:16:07.331992  786188 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0307 18:16:07.332022  786188 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0307 18:16:07.332085  786188 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0307 18:16:07.332094  786188 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0307 18:16:07.332167  786188 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0307 18:16:07.332188  786188 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0307 18:16:07.332234  786188 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0307 18:16:07.332243  786188 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0307 18:16:07.396137  786188 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:16:07.396168  786188 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:16:07.396300  786188 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:16:07.396328  786188 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:16:07.396420  786188 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 18:16:07.396431  786188 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 18:16:07.523937  786188 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:16:07.528351  786188 out.go:204]   - Generating certificates and keys ...
	I0307 18:16:07.523972  786188 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:16:07.528496  786188 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0307 18:16:07.528529  786188 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0307 18:16:07.528577  786188 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0307 18:16:07.528584  786188 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0307 18:16:07.717859  786188 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 18:16:07.717887  786188 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 18:16:07.849570  786188 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0307 18:16:07.849601  786188 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0307 18:16:07.969943  786188 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0307 18:16:07.969972  786188 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0307 18:16:08.173499  786188 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0307 18:16:08.173527  786188 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0307 18:16:08.313210  786188 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0307 18:16:08.313238  786188 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0307 18:16:08.313407  786188 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.313436  786188 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.446657  786188 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0307 18:16:08.446689  786188 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0307 18:16:08.446821  786188 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.446852  786188 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-242095] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0307 18:16:08.575809  786188 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 18:16:08.575834  786188 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 18:16:08.672952  786188 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 18:16:08.672981  786188 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 18:16:09.118621  786188 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0307 18:16:09.118692  786188 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0307 18:16:09.118763  786188 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:16:09.118784  786188 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:16:09.348007  786188 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:16:09.348035  786188 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:16:09.431312  786188 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:16:09.431346  786188 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:16:09.496928  786188 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:16:09.496955  786188 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:16:09.728080  786188 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:16:09.728113  786188 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:16:09.739432  786188 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:16:09.739481  786188 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:16:09.741537  786188 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:16:09.741564  786188 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:16:09.741635  786188 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0307 18:16:09.741651  786188 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 18:16:09.823426  786188 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:16:09.823487  786188 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:16:09.826138  786188 out.go:204]   - Booting up control plane ...
	I0307 18:16:09.826261  786188 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:16:09.826317  786188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:16:09.826438  786188 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:16:09.826471  786188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:16:09.827407  786188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:16:09.827430  786188 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:16:09.828192  786188 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:16:09.828211  786188 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:16:09.829986  786188 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:16:09.830005  786188 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:16:18.332163  786188 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502131 seconds
	I0307 18:16:18.332189  786188 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502131 seconds
	I0307 18:16:18.332349  786188 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 18:16:18.332374  786188 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 18:16:18.343799  786188 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 18:16:18.343822  786188 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 18:16:18.860223  786188 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 18:16:18.860256  786188 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0307 18:16:18.860461  786188 kubeadm.go:322] [mark-control-plane] Marking the node multinode-242095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 18:16:18.860473  786188 command_runner.go:130] > [mark-control-plane] Marking the node multinode-242095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 18:16:19.369037  786188 kubeadm.go:322] [bootstrap-token] Using token: r7749e.dyce20vphzwpiu0j
	I0307 18:16:19.370708  786188 out.go:204]   - Configuring RBAC rules ...
	I0307 18:16:19.369134  786188 command_runner.go:130] > [bootstrap-token] Using token: r7749e.dyce20vphzwpiu0j
	I0307 18:16:19.370854  786188 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 18:16:19.370873  786188 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 18:16:19.373850  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 18:16:19.373870  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 18:16:19.380490  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 18:16:19.380511  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 18:16:19.382957  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 18:16:19.382975  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 18:16:19.385405  786188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 18:16:19.385425  786188 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 18:16:19.387599  786188 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 18:16:19.387616  786188 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 18:16:19.396513  786188 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 18:16:19.396535  786188 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 18:16:19.611118  786188 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0307 18:16:19.611157  786188 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0307 18:16:19.795904  786188 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0307 18:16:19.795932  786188 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0307 18:16:19.797311  786188 kubeadm.go:322] 
	I0307 18:16:19.797419  786188 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0307 18:16:19.797445  786188 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0307 18:16:19.797452  786188 kubeadm.go:322] 
	I0307 18:16:19.797531  786188 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0307 18:16:19.797549  786188 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0307 18:16:19.797560  786188 kubeadm.go:322] 
	I0307 18:16:19.797592  786188 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0307 18:16:19.797599  786188 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0307 18:16:19.797655  786188 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 18:16:19.797668  786188 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 18:16:19.797733  786188 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 18:16:19.797741  786188 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 18:16:19.797744  786188 kubeadm.go:322] 
	I0307 18:16:19.797808  786188 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0307 18:16:19.797819  786188 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0307 18:16:19.797828  786188 kubeadm.go:322] 
	I0307 18:16:19.797910  786188 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 18:16:19.797928  786188 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 18:16:19.797938  786188 kubeadm.go:322] 
	I0307 18:16:19.798003  786188 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0307 18:16:19.798021  786188 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0307 18:16:19.798122  786188 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 18:16:19.798134  786188 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 18:16:19.798220  786188 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 18:16:19.798234  786188 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 18:16:19.798239  786188 kubeadm.go:322] 
	I0307 18:16:19.798355  786188 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 18:16:19.798379  786188 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0307 18:16:19.798487  786188 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0307 18:16:19.798500  786188 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0307 18:16:19.798505  786188 kubeadm.go:322] 
	I0307 18:16:19.798602  786188 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.798621  786188 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.798793  786188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb \
	I0307 18:16:19.798804  786188 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb \
	I0307 18:16:19.798822  786188 kubeadm.go:322] 	--control-plane 
	I0307 18:16:19.798843  786188 command_runner.go:130] > 	--control-plane 
	I0307 18:16:19.798860  786188 kubeadm.go:322] 
	I0307 18:16:19.798947  786188 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0307 18:16:19.798962  786188 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0307 18:16:19.798967  786188 kubeadm.go:322] 
	I0307 18:16:19.799074  786188 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.799084  786188 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token r7749e.dyce20vphzwpiu0j \
	I0307 18:16:19.799223  786188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb 
	I0307 18:16:19.799235  786188 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb 
	I0307 18:16:19.800966  786188 kubeadm.go:322] W0307 18:16:07.290185    1399 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 18:16:19.800980  786188 command_runner.go:130] ! W0307 18:16:07.290185    1399 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 18:16:19.801276  786188 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0307 18:16:19.801308  786188 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0307 18:16:19.801478  786188 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:16:19.801492  786188 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:16:19.801517  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:16:19.801539  786188 cni.go:136] 1 nodes found, recommending kindnet
	I0307 18:16:19.803644  786188 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 18:16:19.805244  786188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 18:16:19.809249  786188 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 18:16:19.809269  786188 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0307 18:16:19.809279  786188 command_runner.go:130] > Device: 36h/54d	Inode: 2129263     Links: 1
	I0307 18:16:19.809288  786188 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:16:19.809296  786188 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:16:19.809303  786188 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:16:19.809310  786188 command_runner.go:130] > Change: 2023-03-07 18:01:35.631850484 +0000
	I0307 18:16:19.809328  786188 command_runner.go:130] >  Birth: -
	I0307 18:16:19.809695  786188 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 18:16:19.809714  786188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 18:16:19.826959  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 18:16:20.681840  786188 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0307 18:16:20.685894  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0307 18:16:20.696585  786188 command_runner.go:130] > serviceaccount/kindnet created
	I0307 18:16:20.704387  786188 command_runner.go:130] > daemonset.apps/kindnet created
	I0307 18:16:20.708153  786188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 18:16:20.708224  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:20.708268  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1 minikube.k8s.io/name=multinode-242095 minikube.k8s.io/updated_at=2023_03_07T18_16_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:20.801587  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0307 18:16:20.806057  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:20.811657  786188 command_runner.go:130] > node/multinode-242095 labeled
	I0307 18:16:20.814348  786188 command_runner.go:130] > -16
	I0307 18:16:20.814384  786188 ops.go:34] apiserver oom_adj: -16
	I0307 18:16:20.866875  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:21.370044  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:21.429400  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:21.870358  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:21.932610  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:22.370244  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:22.428897  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:22.869838  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:22.929182  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:23.370079  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:23.430666  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:23.870324  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:23.931572  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:24.369475  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:24.430078  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:24.869626  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:24.932327  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:25.370177  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:25.432463  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:25.870496  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:25.930432  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:26.370423  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:26.430173  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:26.869462  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:26.931621  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:27.370290  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:27.431618  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:27.870269  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:27.929680  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:28.369625  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:28.431090  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:28.869658  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:28.932242  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:29.369779  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:29.430964  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:29.869529  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:29.927813  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:30.370224  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:30.431125  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:30.869835  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:30.932913  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:31.369514  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:31.434726  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:31.870349  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:31.933361  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:32.369992  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:32.428660  786188 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0307 18:16:32.869680  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 18:16:33.004581  786188 command_runner.go:130] > NAME      SECRETS   AGE
	I0307 18:16:33.004606  786188 command_runner.go:130] > default   0         1s
	I0307 18:16:33.007272  786188 kubeadm.go:1073] duration metric: took 12.299093044s to wait for elevateKubeSystemPrivileges.
	I0307 18:16:33.007306  786188 kubeadm.go:403] StartCluster complete in 25.789200932s
	I0307 18:16:33.007327  786188 settings.go:142] acquiring lock: {Name:mk20aadaac3bdeaefa078eca20fd3af7c7410f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:33.007417  786188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:33.008372  786188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-636026/kubeconfig: {Name:mk9b5454025117fb515bc2f65b05f28b0fa10239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:16:33.008680  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 18:16:33.008762  786188 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0307 18:16:33.008951  786188 addons.go:66] Setting storage-provisioner=true in profile "multinode-242095"
	I0307 18:16:33.008955  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:33.008980  786188 addons.go:228] Setting addon storage-provisioner=true in "multinode-242095"
	I0307 18:16:33.009008  786188 addons.go:66] Setting default-storageclass=true in profile "multinode-242095"
	I0307 18:16:33.009040  786188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-242095"
	I0307 18:16:33.009048  786188 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:16:33.009051  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:33.009416  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:33.009373  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:16:33.009591  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:33.010656  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:16:33.010675  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.010688  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.010699  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.010933  786188 cert_rotation.go:137] Starting client certificate rotation controller
	I0307 18:16:33.020139  786188 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 18:16:33.020164  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.020175  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.020185  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.020195  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.020206  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.020219  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:16:33.020229  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.020242  786188 round_trippers.go:580]     Audit-Id: 18dae1a0-98d0-4016-8534-39090f93c347
	I0307 18:16:33.020275  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"314","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.020828  786188 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"314","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.020889  786188 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:16:33.020901  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.020913  786188 round_trippers.go:473]     Content-Type: application/json
	I0307 18:16:33.020923  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.020938  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.027536  786188 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 18:16:33.027560  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.027569  786188 round_trippers.go:580]     Audit-Id: 5ed54bf9-85de-475e-8220-d39207ace3fb
	I0307 18:16:33.027577  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.027585  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.027593  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.027602  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.027609  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:16:33.027617  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.027648  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"347","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.093612  786188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:16:33.092444  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:33.095725  786188 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:16:33.095748  786188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 18:16:33.095805  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:33.095928  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:16:33.096437  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0307 18:16:33.096455  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.096467  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.096477  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.099415  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:33.099434  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.099469  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.099479  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.099488  786188 round_trippers.go:580]     Content-Length: 109
	I0307 18:16:33.099495  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.099503  786188 round_trippers.go:580]     Audit-Id: 1710ddc7-6dd6-4d3f-8d35-374c5c4c9459
	I0307 18:16:33.099512  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.099519  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.099543  786188 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"356"},"items":[]}
	I0307 18:16:33.099908  786188 addons.go:228] Setting addon default-storageclass=true in "multinode-242095"
	I0307 18:16:33.099950  786188 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:16:33.100356  786188 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:16:33.130155  786188 command_runner.go:130] > apiVersion: v1
	I0307 18:16:33.130178  786188 command_runner.go:130] > data:
	I0307 18:16:33.130185  786188 command_runner.go:130] >   Corefile: |
	I0307 18:16:33.130191  786188 command_runner.go:130] >     .:53 {
	I0307 18:16:33.130196  786188 command_runner.go:130] >         errors
	I0307 18:16:33.130210  786188 command_runner.go:130] >         health {
	I0307 18:16:33.130217  786188 command_runner.go:130] >            lameduck 5s
	I0307 18:16:33.130223  786188 command_runner.go:130] >         }
	I0307 18:16:33.130229  786188 command_runner.go:130] >         ready
	I0307 18:16:33.130239  786188 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0307 18:16:33.130249  786188 command_runner.go:130] >            pods insecure
	I0307 18:16:33.130257  786188 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0307 18:16:33.130268  786188 command_runner.go:130] >            ttl 30
	I0307 18:16:33.130280  786188 command_runner.go:130] >         }
	I0307 18:16:33.130289  786188 command_runner.go:130] >         prometheus :9153
	I0307 18:16:33.130297  786188 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0307 18:16:33.130308  786188 command_runner.go:130] >            max_concurrent 1000
	I0307 18:16:33.130317  786188 command_runner.go:130] >         }
	I0307 18:16:33.130323  786188 command_runner.go:130] >         cache 30
	I0307 18:16:33.130337  786188 command_runner.go:130] >         loop
	I0307 18:16:33.130343  786188 command_runner.go:130] >         reload
	I0307 18:16:33.130353  786188 command_runner.go:130] >         loadbalance
	I0307 18:16:33.130358  786188 command_runner.go:130] >     }
	I0307 18:16:33.130368  786188 command_runner.go:130] > kind: ConfigMap
	I0307 18:16:33.130374  786188 command_runner.go:130] > metadata:
	I0307 18:16:33.130387  786188 command_runner.go:130] >   creationTimestamp: "2023-03-07T18:16:19Z"
	I0307 18:16:33.130393  786188 command_runner.go:130] >   name: coredns
	I0307 18:16:33.130400  786188 command_runner.go:130] >   namespace: kube-system
	I0307 18:16:33.130413  786188 command_runner.go:130] >   resourceVersion: "227"
	I0307 18:16:33.130421  786188 command_runner.go:130] >   uid: 003916ba-54b9-48a3-a139-d67b66c9e19a
	I0307 18:16:33.132903  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 18:16:33.173017  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:33.179700  786188 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 18:16:33.179723  786188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 18:16:33.179767  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:16:33.256040  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:16:33.309654  786188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:16:33.414125  786188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 18:16:33.528413  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:16:33.528435  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:33.528446  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:33.528454  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:33.531099  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:33.531126  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:33.531136  786188 round_trippers.go:580]     Audit-Id: be9eea0f-beb3-4372-bb4f-6ddd8e760d4a
	I0307 18:16:33.531144  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:33.531153  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:33.531162  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:33.531171  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:33.531183  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:16:33.531192  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:33 GMT
	I0307 18:16:33.531216  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"357","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 18:16:33.531328  786188 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-242095" context rescaled to 1 replicas
	I0307 18:16:33.531362  786188 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 18:16:33.533721  786188 out.go:177] * Verifying Kubernetes components...
	I0307 18:16:33.535180  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:16:33.598827  786188 command_runner.go:130] > configmap/coredns replaced
	I0307 18:16:33.604492  786188 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0307 18:16:33.902262  786188 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0307 18:16:33.908095  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0307 18:16:33.918986  786188 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0307 18:16:33.924506  786188 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0307 18:16:33.929633  786188 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0307 18:16:33.937653  786188 command_runner.go:130] > pod/storage-provisioner created
	I0307 18:16:34.013590  786188 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0307 18:16:34.020473  786188 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0307 18:16:34.019024  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:16:34.022000  786188 addons.go:499] enable addons completed in 1.013235364s: enabled=[storage-provisioner default-storageclass]
	I0307 18:16:34.022312  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:16:34.022658  786188 node_ready.go:35] waiting up to 6m0s for node "multinode-242095" to be "Ready" ...
	I0307 18:16:34.022744  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:34.022754  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.022766  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.022780  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.024816  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:34.024838  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.024853  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.024861  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.024873  786188 round_trippers.go:580]     Audit-Id: 95988b2b-cc48-4f86-a80c-56c62fcb222d
	I0307 18:16:34.024881  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.024892  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.024901  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.025017  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:34.025791  786188 node_ready.go:49] node "multinode-242095" has status "Ready":"True"
	I0307 18:16:34.025812  786188 node_ready.go:38] duration metric: took 3.135062ms waiting for node "multinode-242095" to be "Ready" ...
	I0307 18:16:34.025824  786188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:16:34.025898  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:34.025908  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.025920  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.025933  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.032139  786188 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 18:16:34.032159  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.032170  786188 round_trippers.go:580]     Audit-Id: 127ad593-3e87-420f-9e8c-2f29b883dc26
	I0307 18:16:34.032179  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.032188  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.032194  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.032202  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.032210  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.032688  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"371"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"315","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54043 chars]
	I0307 18:16:34.036057  786188 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:34.036117  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:34.036125  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.036132  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.036141  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.037717  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:34.037732  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.037739  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.037746  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.037753  786188 round_trippers.go:580]     Audit-Id: 96f11db6-fe5f-45fb-83cd-a27fc7dfd3c0
	I0307 18:16:34.037761  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.037769  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.037782  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.037877  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"315","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4942 chars]
	I0307 18:16:34.539046  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:34.539071  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.539083  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.539093  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.541551  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:34.541570  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.541580  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.541589  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.541598  786188 round_trippers.go:580]     Audit-Id: 6c2a6e23-98b8-4b4b-a5c3-b1c6d19f0546
	I0307 18:16:34.541607  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.541617  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.541629  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.541755  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:34.542405  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:34.542424  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:34.542435  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:34.542445  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:34.592850  786188 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0307 18:16:34.592874  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:34.592885  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:34.592894  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:34.592908  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:34.592919  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:34.592926  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:34 GMT
	I0307 18:16:34.592941  786188 round_trippers.go:580]     Audit-Id: 930dcf65-da60-4cf4-871f-a2ef96d7130c
	I0307 18:16:34.593061  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:35.038941  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:35.038970  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.038983  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.038995  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.092510  786188 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0307 18:16:35.092619  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.092638  786188 round_trippers.go:580]     Audit-Id: 75656a56-97ec-41bc-994b-eabeeca6ae54
	I0307 18:16:35.092647  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.092660  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.092680  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.092690  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.092701  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.092899  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:35.093506  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:35.093524  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.093535  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.093543  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.095797  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:35.095818  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.095828  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.095837  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.095847  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.095859  786188 round_trippers.go:580]     Audit-Id: e8ef9595-626c-4b84-a62e-b336fca45f15
	I0307 18:16:35.095871  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.095880  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.096236  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:35.538805  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:35.538829  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.538842  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.538851  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.540906  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:35.540963  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.540983  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.540996  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.541008  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.541032  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.541048  786188 round_trippers.go:580]     Audit-Id: 65fbf7ef-d842-45d1-8ada-bf75b6b08a8e
	I0307 18:16:35.541060  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.541174  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:35.541688  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:35.541736  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:35.541755  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:35.541772  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:35.543245  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:35.543290  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:35.543301  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:35.543316  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:35.543322  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:35.543331  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:35 GMT
	I0307 18:16:35.543337  786188 round_trippers.go:580]     Audit-Id: 6ff30fe4-67ec-479b-abbc-c1a5100e8bd6
	I0307 18:16:35.543344  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:35.543426  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:36.039017  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:36.039036  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.039044  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.039050  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.041413  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:36.041439  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.041451  786188 round_trippers.go:580]     Audit-Id: f316d885-530a-4fd5-9378-d6e39f7715c2
	I0307 18:16:36.041463  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.041471  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.041480  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.041485  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.041493  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.041594  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"376","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6148 chars]
	I0307 18:16:36.042063  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:36.042076  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.042083  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.042089  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.043773  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:36.043797  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.043808  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.043815  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.043823  786188 round_trippers.go:580]     Audit-Id: 87c297f8-2b38-4aa4-8e80-f9ff3455c0a7
	I0307 18:16:36.043829  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.043839  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.043852  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.043945  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:36.044247  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:36.538556  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:36.538579  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.538588  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.538595  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.540695  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:36.540724  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.540737  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.540747  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.540769  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.540782  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.540793  786188 round_trippers.go:580]     Audit-Id: ad51122b-95e6-495c-a09c-8d5cce9b45c1
	I0307 18:16:36.540802  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.540949  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:36.541459  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:36.541473  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:36.541481  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:36.541494  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:36.543218  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:36.543237  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:36.543244  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:36.543250  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:36 GMT
	I0307 18:16:36.543256  786188 round_trippers.go:580]     Audit-Id: bd9810a4-0997-490e-9136-2b0fb74693b2
	I0307 18:16:36.543261  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:36.543266  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:36.543272  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:36.543382  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:37.039099  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:37.039120  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.039132  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.039141  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.041110  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.041131  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.041141  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.041150  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.041158  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.041167  786188 round_trippers.go:580]     Audit-Id: 43aef3df-a06c-4af7-b96c-1b8913715a11
	I0307 18:16:37.041177  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.041183  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.041320  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:37.041883  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:37.041897  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.041905  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.041911  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.043476  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.043494  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.043502  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.043507  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.043512  786188 round_trippers.go:580]     Audit-Id: 371242cf-4c94-4474-81c3-4c1e580aac8d
	I0307 18:16:37.043517  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.043522  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.043528  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.043668  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:37.539318  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:37.539339  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.539347  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.539353  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.541278  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.541304  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.541315  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.541323  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.541330  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.541339  786188 round_trippers.go:580]     Audit-Id: 1ebb83bb-ca94-4695-bb9a-0fc47eb523f5
	I0307 18:16:37.541348  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.541359  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.541471  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:37.541930  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:37.541945  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:37.541955  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:37.541964  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:37.543553  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:37.543577  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:37.543587  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:37.543596  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:37 GMT
	I0307 18:16:37.543606  786188 round_trippers.go:580]     Audit-Id: 0f5de3e2-00f5-4285-afcd-791bf9969772
	I0307 18:16:37.543615  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:37.543629  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:37.543642  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:37.543721  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:38.039380  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:38.039399  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.039407  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.039414  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.041580  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:38.041600  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.041608  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.041618  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.041627  786188 round_trippers.go:580]     Audit-Id: 21b9ff99-dd20-4696-81bf-da2a37d2b427
	I0307 18:16:38.041635  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.041642  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.041651  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.041794  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:38.042235  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:38.042247  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.042254  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.042260  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.043906  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:38.043928  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.043938  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.043947  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.043953  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.043961  786188 round_trippers.go:580]     Audit-Id: 56ce60d0-3eeb-4764-ab92-416ca1dcdc4d
	I0307 18:16:38.043974  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.043984  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.044086  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:38.044391  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:38.538648  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:38.538679  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.538687  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.538694  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.540901  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:38.540921  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.540928  786188 round_trippers.go:580]     Audit-Id: 0e1e7500-e531-40b1-9878-ea3ed632cfad
	I0307 18:16:38.540934  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.540939  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.540947  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.540952  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.540958  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.541130  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:38.541616  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:38.541629  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:38.541637  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:38.541642  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:38.543494  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:38.543510  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:38.543517  786188 round_trippers.go:580]     Audit-Id: 86a81d6a-bd41-4ec0-8b9e-1010c9bd0054
	I0307 18:16:38.543522  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:38.543527  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:38.543533  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:38.543538  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:38.543543  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:38 GMT
	I0307 18:16:38.543691  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:39.039396  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:39.039415  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.039424  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.039430  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.041533  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:39.041554  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.041564  786188 round_trippers.go:580]     Audit-Id: d27a8536-1bb4-415d-811e-e800bfd341cc
	I0307 18:16:39.041571  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.041579  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.041588  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.041597  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.041608  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.041720  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:39.042160  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:39.042175  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.042185  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.042194  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.043832  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:39.043853  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.043863  786188 round_trippers.go:580]     Audit-Id: 46c2e535-925f-4a18-b132-a60ef44d418a
	I0307 18:16:39.043874  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.043887  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.043896  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.043908  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.043920  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.043998  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:39.539416  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:39.539436  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.539457  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.539463  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.541530  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:39.541550  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.541558  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.541563  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.541568  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.541574  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.541579  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.541584  786188 round_trippers.go:580]     Audit-Id: 3fbc3dea-be1d-43c9-a388-a91c3deee502
	I0307 18:16:39.541691  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:39.542141  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:39.542154  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:39.542161  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:39.542167  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:39.543812  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:39.543837  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:39.543848  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:39.543872  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:39.543883  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:39 GMT
	I0307 18:16:39.543901  786188 round_trippers.go:580]     Audit-Id: c0f988dd-97c0-42e1-bf1b-5f1cacdd29c5
	I0307 18:16:39.543910  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:39.543922  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:39.544058  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:40.038850  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:40.038877  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.038890  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.038904  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.041547  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:40.041574  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.041585  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.041594  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.041602  786188 round_trippers.go:580]     Audit-Id: 4fe7b7f9-ee66-4c02-9404-98c1deada2fb
	I0307 18:16:40.041648  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.041665  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.041677  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.041832  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:40.042421  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:40.042437  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.042449  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.042459  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.044460  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:40.044480  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.044489  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.044498  786188 round_trippers.go:580]     Audit-Id: aadfb225-8e4b-40b0-97ef-d8643516152e
	I0307 18:16:40.044511  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.044523  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.044535  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.044544  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.044640  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"339","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:40.044986  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:40.538862  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:40.538884  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.538892  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.538898  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.541026  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:40.541049  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.541060  786188 round_trippers.go:580]     Audit-Id: b63003f7-b641-43cb-a2bc-c412ced88bfe
	I0307 18:16:40.541070  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.541079  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.541089  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.541095  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.541103  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.541233  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:40.541666  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:40.541677  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:40.541686  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:40.541692  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:40.543485  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:40.543507  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:40.543518  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:40 GMT
	I0307 18:16:40.543524  786188 round_trippers.go:580]     Audit-Id: 005c40f7-ef5f-45fe-967d-830436f2e2d3
	I0307 18:16:40.543533  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:40.543546  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:40.543563  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:40.543575  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:40.543666  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:41.039191  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:41.039210  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.039218  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.039224  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.041773  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:41.041799  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.041811  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.041821  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.041830  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.041839  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.041847  786188 round_trippers.go:580]     Audit-Id: 1b9ae6b0-86a8-47ba-9cfa-c91468a20ab0
	I0307 18:16:41.041859  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.041974  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:41.042551  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:41.042567  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.042577  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.042587  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.044251  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:41.044272  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.044283  786188 round_trippers.go:580]     Audit-Id: 57ffdca1-4313-4841-aa1f-2c8a948d2d7a
	I0307 18:16:41.044293  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.044302  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.044313  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.044324  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.044337  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.044440  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:41.539116  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:41.539141  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.539153  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.539163  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.541492  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:41.541515  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.541526  786188 round_trippers.go:580]     Audit-Id: 34e58c83-0a75-4c9d-bbe3-6941e0ac5926
	I0307 18:16:41.541533  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.541538  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.541544  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.541549  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.541566  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.541687  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:41.542199  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:41.542213  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:41.542220  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:41.542226  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:41.544216  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:41.544242  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:41.544259  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:41.544269  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:41.544278  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:41.544295  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:41 GMT
	I0307 18:16:41.544306  786188 round_trippers.go:580]     Audit-Id: 1406e5b4-7587-489a-9564-f4c01cd42c72
	I0307 18:16:41.544315  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:41.544434  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:42.039046  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:42.039069  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.039077  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.039083  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.041918  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:42.041943  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.041952  786188 round_trippers.go:580]     Audit-Id: 09cae67b-5d59-498f-9f3a-5f2aa9854097
	I0307 18:16:42.041958  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.041967  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.041979  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.041985  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.041991  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.042125  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:42.042621  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:42.042638  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.042645  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.042652  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.044736  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:42.044761  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.044772  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.044780  786188 round_trippers.go:580]     Audit-Id: 2210cdb7-4537-4678-87c1-10faba987678
	I0307 18:16:42.044794  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.044805  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.044814  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.044826  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.044912  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:42.045234  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
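The log above shows minikube's readiness wait loop (`pod_ready.go`): it re-fetches the CoreDNS pod and its node roughly every 500 ms until the pod reports `Ready: True` or the wait times out. A minimal, hypothetical sketch of that polling pattern is below; the function name, parameters, and the injected `check` callable are assumptions for illustration, not minikube's actual Go implementation.

```python
import time

def wait_for_ready(check, timeout=30.0, interval=0.5):
    """Poll check() until it returns True or `timeout` seconds elapse.

    `check` is any zero-argument callable returning a bool -- for example,
    one that fetches the pod object and inspects its Ready condition, as
    the GET requests in the log above do (assumption: the real loop in
    pod_ready.go follows this same shape, written in Go).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True          # pod became Ready within the deadline
        time.sleep(interval)     # back off before the next poll, ~500 ms here
    return False                 # timed out while status was still not Ready
```

In the failing run, every poll returned a pod whose `Ready` condition was `False` (the `resourceVersion` never advances past 395), so the loop keeps issuing the GET pairs seen above.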
	I0307 18:16:42.538501  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:42.538521  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.538530  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.538536  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.540698  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:42.540720  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.540730  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.540738  786188 round_trippers.go:580]     Audit-Id: 6ecd4dd9-6bc5-48e5-a213-a5322408d9ef
	I0307 18:16:42.540747  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.540759  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.540769  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.540784  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.540923  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:42.541376  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:42.541391  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:42.541401  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:42.541409  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:42.543175  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:42.543198  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:42.543209  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:42.543219  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:42 GMT
	I0307 18:16:42.543232  786188 round_trippers.go:580]     Audit-Id: 2f26f49c-62e9-4167-b2a5-aba25eb53a8e
	I0307 18:16:42.543241  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:42.543254  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:42.543271  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:42.543373  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:43.039276  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:43.039302  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.039312  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.039320  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.041703  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:43.041727  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.041737  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.041747  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.041756  786188 round_trippers.go:580]     Audit-Id: 1591d2ff-79a9-42ca-83a9-1c79298fa1fd
	I0307 18:16:43.041765  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.041774  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.041782  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.041990  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:43.042596  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:43.042613  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.042620  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.042626  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.044545  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:43.044564  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.044571  786188 round_trippers.go:580]     Audit-Id: cb2c3ecd-00e8-442e-99a2-674ce54494e5
	I0307 18:16:43.044578  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.044586  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.044595  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.044604  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.044614  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.044716  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:43.539401  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:43.539422  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.539430  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.539437  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.541635  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:43.541659  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.541670  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.541680  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.541687  786188 round_trippers.go:580]     Audit-Id: 663a5b66-807d-442f-8515-f3c013b511c5
	I0307 18:16:43.541692  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.541701  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.541706  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.541816  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:43.542261  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:43.542272  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:43.542279  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:43.542285  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:43.543978  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:43.544000  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:43.544011  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:43.544019  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:43.544028  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:43.544044  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:43 GMT
	I0307 18:16:43.544054  786188 round_trippers.go:580]     Audit-Id: aff46b5a-2d10-4afe-a3ef-4ca592c814cf
	I0307 18:16:43.544063  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:43.544152  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:44.038525  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:44.038549  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.038557  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.038563  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.041679  786188 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 18:16:44.041715  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.041727  786188 round_trippers.go:580]     Audit-Id: 30e5b258-5981-4115-aa0c-d7db74b0c671
	I0307 18:16:44.041737  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.041750  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.041762  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.041770  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.041789  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.041924  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:44.042566  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:44.042585  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.042597  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.042607  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.044455  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:44.044476  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.044487  786188 round_trippers.go:580]     Audit-Id: 061059bc-5a15-4f0c-8f0d-fba489414563
	I0307 18:16:44.044499  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.044508  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.044516  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.044529  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.044542  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.044638  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:44.539243  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:44.539264  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.539276  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.539284  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.542333  786188 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 18:16:44.542361  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.542373  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.542382  786188 round_trippers.go:580]     Audit-Id: fe0fcfdc-0b3d-443b-97bc-47bc4bb21ac1
	I0307 18:16:44.542391  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.542400  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.542409  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.542422  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.542575  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:44.543223  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:44.543243  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:44.543255  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:44.543264  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:44.545242  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:44.545263  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:44.545272  786188 round_trippers.go:580]     Audit-Id: 8f5a948e-bb7d-429d-ba18-9eb4449c7726
	I0307 18:16:44.545282  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:44.545291  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:44.545300  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:44.545313  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:44.545325  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:44 GMT
	I0307 18:16:44.545421  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:44.545812  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:45.038964  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:45.038997  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.039006  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.039012  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.041416  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:45.041441  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.041453  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.041463  786188 round_trippers.go:580]     Audit-Id: a4c0a5c3-7382-4925-9172-fb42b4bc46f8
	I0307 18:16:45.041476  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.041485  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.041495  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.041504  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.041655  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:45.042281  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:45.042384  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.042410  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.042423  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.044442  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:45.044464  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.044480  786188 round_trippers.go:580]     Audit-Id: 0e04065e-c13f-4508-9dab-3cd28224a402
	I0307 18:16:45.044494  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.044503  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.044512  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.044524  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.044533  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.044612  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:45.539308  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:45.539335  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.539348  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.539359  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.541728  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:45.541756  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.541768  786188 round_trippers.go:580]     Audit-Id: 68a8f2d5-6651-4ddb-8fb1-1afd9a80eb67
	I0307 18:16:45.541777  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.541786  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.541794  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.541803  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.541818  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.541972  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:45.542602  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:45.542620  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:45.542632  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:45.542642  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:45.544632  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:45.544654  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:45.544664  786188 round_trippers.go:580]     Audit-Id: 7ce41dea-e316-4dfd-b933-bdb780c67575
	I0307 18:16:45.544674  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:45.544685  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:45.544693  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:45.544703  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:45.544715  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:45 GMT
	I0307 18:16:45.544807  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:46.039433  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:46.039472  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.039484  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.039493  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.041854  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:46.041879  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.041890  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.041900  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.041908  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.041916  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.041926  786188 round_trippers.go:580]     Audit-Id: 0a85ac51-7161-4322-ac85-3704bc133b0f
	I0307 18:16:46.041934  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.042053  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:46.042528  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:46.042544  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.042551  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.042557  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.044475  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:46.044495  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.044505  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.044514  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.044527  786188 round_trippers.go:580]     Audit-Id: a68b33cd-ec77-4bd5-9781-f8a65738b7a5
	I0307 18:16:46.044540  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.044553  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.044565  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.044652  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:46.539175  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:46.539196  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.539204  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.539210  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.541558  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:46.541584  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.541594  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.541603  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.541612  786188 round_trippers.go:580]     Audit-Id: fa847503-669d-4969-b6b6-f5bf0611aef3
	I0307 18:16:46.541625  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.541633  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.541646  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.541765  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:46.542251  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:46.542272  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:46.542279  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:46.542285  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:46.544285  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:46.544304  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:46.544312  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:46.544317  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:46.544324  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:46.544329  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:46 GMT
	I0307 18:16:46.544334  786188 round_trippers.go:580]     Audit-Id: 453766dc-0cd9-4548-ac7c-306860d30ad8
	I0307 18:16:46.544343  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:46.544418  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:47.038684  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:47.038706  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.038714  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.038723  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.041140  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:47.041166  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.041177  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.041186  786188 round_trippers.go:580]     Audit-Id: f92c9990-7f4b-4b4b-adb4-9406cde5fe8d
	I0307 18:16:47.041195  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.041205  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.041214  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.041223  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.041350  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:47.041941  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:47.041958  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.041969  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.041980  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.044059  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:47.044094  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.044108  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.044119  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.044128  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.044145  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.044153  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.044161  786188 round_trippers.go:580]     Audit-Id: 0feed19c-98ea-4ed3-a16d-c190f24d4149
	I0307 18:16:47.044257  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:47.044552  786188 pod_ready.go:102] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"False"
	I0307 18:16:47.538830  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:47.538857  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.538870  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.538880  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.541255  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:47.541281  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.541292  786188 round_trippers.go:580]     Audit-Id: b79a5abf-8ae5-44f0-87bc-d37d875a4f13
	I0307 18:16:47.541301  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.541315  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.541325  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.541339  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.541351  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.541491  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:47.542156  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:47.542176  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:47.542188  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:47.542204  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:47.544022  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:47.544047  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:47.544059  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:47.544069  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:47.544079  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:47.544092  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:47.544105  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:47 GMT
	I0307 18:16:47.544118  786188 round_trippers.go:580]     Audit-Id: 7ab8337a-94bc-4c94-8a32-ccd298b625b8
	I0307 18:16:47.544230  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:48.038660  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:48.038680  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.038688  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.038695  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.040939  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:48.040964  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.040972  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.040978  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.040988  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.040993  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.040999  786188 round_trippers.go:580]     Audit-Id: 3a1bc639-b1c3-46e8-a833-e8bc1ce08d19
	I0307 18:16:48.041005  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.041136  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:48.041556  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:48.041566  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.041573  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.041579  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.043426  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:48.043470  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.043481  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.043494  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.043507  786188 round_trippers.go:580]     Audit-Id: 9f7b7e61-6ef6-4227-a75b-027c76be73a3
	I0307 18:16:48.043517  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.043526  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.043543  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.043650  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:48.539326  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:48.539354  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.539365  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.539375  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.541854  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:48.541879  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.541889  786188 round_trippers.go:580]     Audit-Id: 864837ff-53a7-4433-aa6e-d4c9f930f2d0
	I0307 18:16:48.541896  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.541905  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.541919  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.541932  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.541944  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.542055  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:48.542552  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:48.542565  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:48.542572  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:48.542580  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:48.544513  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:48.544535  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:48.544546  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:48 GMT
	I0307 18:16:48.544555  786188 round_trippers.go:580]     Audit-Id: c30f4cdc-aa33-474b-b0db-9973aaa44d24
	I0307 18:16:48.544564  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:48.544577  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:48.544590  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:48.544603  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:48.544704  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.039298  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:49.039319  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.039327  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.039334  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.041443  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:49.041472  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.041483  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.041491  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.041499  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.041508  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.041518  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.041532  786188 round_trippers.go:580]     Audit-Id: a685f77e-6956-4862-b93f-2a81f21a6fdb
	I0307 18:16:49.041692  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"395","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6377 chars]
	I0307 18:16:49.042163  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.042178  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.042185  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.042191  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.043884  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.043902  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.043909  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.043914  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.043920  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.043928  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.043936  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.043948  786188 round_trippers.go:580]     Audit-Id: d454af41-3b81-4a02-b271-c24acd9134c0
	I0307 18:16:49.044021  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.538626  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:16:49.538652  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.538663  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.538671  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.540878  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:49.540901  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.540909  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.540915  786188 round_trippers.go:580]     Audit-Id: e9c8658f-eca8-4ae5-a1b7-d39c054acd3c
	I0307 18:16:49.540921  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.540930  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.540939  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.540951  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.541100  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6489 chars]
	I0307 18:16:49.541578  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.541595  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.541602  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.541608  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.543469  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.543490  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.543500  786188 round_trippers.go:580]     Audit-Id: 2395a8f3-e518-41dd-9763-2c672cd86c7d
	I0307 18:16:49.543509  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.543522  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.543532  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.543545  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.543557  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.543651  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.544022  786188 pod_ready.go:92] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.544046  786188 pod_ready.go:81] duration metric: took 15.507969037s waiting for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.544059  786188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.544111  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-242095
	I0307 18:16:49.544120  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.544132  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.544143  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.545761  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.545784  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.545798  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.545808  786188 round_trippers.go:580]     Audit-Id: 54794421-d181-48ac-aab6-70ba00969f87
	I0307 18:16:49.545821  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.545829  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.545842  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.545854  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.545947  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-242095","namespace":"kube-system","uid":"58a90a44-38a6-4150-b6a5-d68e1257f6f3","resourceVersion":"286","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.mirror":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.seen":"2023-03-07T18:16:19.703879850Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0307 18:16:49.546319  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.546331  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.546338  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.546344  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.547861  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.547882  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.547892  786188 round_trippers.go:580]     Audit-Id: a2a1d732-596c-4a09-9041-2423d5dfe69d
	I0307 18:16:49.547901  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.547910  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.547924  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.547930  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.547938  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.548017  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.548258  786188 pod_ready.go:92] pod "etcd-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.548267  786188 pod_ready.go:81] duration metric: took 4.199405ms waiting for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.548277  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.548313  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-242095
	I0307 18:16:49.548321  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.548328  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.548335  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.549751  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.549771  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.549781  786188 round_trippers.go:580]     Audit-Id: 31b988a0-e19a-4a4a-9210-ef791639f50e
	I0307 18:16:49.549789  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.549799  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.549812  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.549823  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.549830  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.549928  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-242095","namespace":"kube-system","uid":"17d64e05-257c-45b2-bec2-6b363cbfb788","resourceVersion":"293","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.mirror":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.seen":"2023-03-07T18:16:19.703896620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0307 18:16:49.550276  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.550286  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.550293  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.550299  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.551830  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.551846  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.551853  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.551858  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.551863  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.551869  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.551878  786188 round_trippers.go:580]     Audit-Id: 6ebf0e78-f622-48f1-985b-a745502de4f5
	I0307 18:16:49.551888  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.551988  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.552234  786188 pod_ready.go:92] pod "kube-apiserver-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.552244  786188 pod_ready.go:81] duration metric: took 3.961927ms waiting for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.552252  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.552294  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-242095
	I0307 18:16:49.552306  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.552313  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.552319  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.553790  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.553814  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.553824  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.553834  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.553842  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.553856  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.553862  786188 round_trippers.go:580]     Audit-Id: 58e08f57-1904-48f8-88b7-578a6c1ffe50
	I0307 18:16:49.553867  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.553952  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-242095","namespace":"kube-system","uid":"536246ee-9384-411a-bd3a-a3f3862a51bc","resourceVersion":"291","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.mirror":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.seen":"2023-03-07T18:16:19.703897932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0307 18:16:49.554303  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.554313  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.554320  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.554326  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.555536  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.555557  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.555568  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.555577  786188 round_trippers.go:580]     Audit-Id: 5b30a3de-20f2-4e12-96c8-8bd1e1a41b54
	I0307 18:16:49.555590  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.555602  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.555615  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.555637  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.555723  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.555975  786188 pod_ready.go:92] pod "kube-controller-manager-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.555986  786188 pod_ready.go:81] duration metric: took 3.729298ms waiting for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.555993  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.556030  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rjsmj
	I0307 18:16:49.556037  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.556043  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.556050  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.557406  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.557429  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.557439  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.557448  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.557458  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.557470  786188 round_trippers.go:580]     Audit-Id: 4e917739-9a9c-4dc1-b8c4-3a0ca6408ea4
	I0307 18:16:49.557480  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.557493  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.557605  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rjsmj","generateName":"kube-proxy-","namespace":"kube-system","uid":"c20d9dc5-69a3-46f9-bdd7-7a54def58eac","resourceVersion":"382","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0307 18:16:49.557945  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.557956  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.557963  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.557974  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.559337  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.559358  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.559368  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.559377  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.559390  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.559410  786188 round_trippers.go:580]     Audit-Id: 106d7541-bfed-4e3d-8aaa-483c2ab73fbf
	I0307 18:16:49.559419  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.559427  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.559531  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.559764  786188 pod_ready.go:92] pod "kube-proxy-rjsmj" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.559777  786188 pod_ready.go:81] duration metric: took 3.777889ms waiting for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.559786  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.739167  786188 request.go:622] Waited for 179.320938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-242095
	I0307 18:16:49.739221  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-242095
	I0307 18:16:49.739226  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.739234  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.739244  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.741119  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:49.741146  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.741157  786188 round_trippers.go:580]     Audit-Id: 06997025-cfb8-4b53-b1ba-2f1d1ce94405
	I0307 18:16:49.741163  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.741169  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.741178  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.741184  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.741192  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.741282  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-242095","namespace":"kube-system","uid":"bd31dd93-d9b4-4f7a-9d31-d15d68702789","resourceVersion":"282","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.mirror":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.seen":"2023-03-07T18:16:19.703898726Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0307 18:16:49.938682  786188 request.go:622] Waited for 196.990276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.938754  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:16:49.938759  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.938767  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.938776  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.940976  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:49.940994  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.941002  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.941008  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.941020  786188 round_trippers.go:580]     Audit-Id: 98b81c03-a2ad-45a5-a79c-c2eb53225ba9
	I0307 18:16:49.941034  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.941046  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.941057  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.941176  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"401","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0307 18:16:49.941463  786188 pod_ready.go:92] pod "kube-scheduler-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:16:49.941473  786188 pod_ready.go:81] duration metric: took 381.681393ms waiting for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:16:49.941484  786188 pod_ready.go:38] duration metric: took 15.915650238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:16:49.941506  786188 api_server.go:51] waiting for apiserver process to appear ...
	I0307 18:16:49.941544  786188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:16:49.950433  786188 command_runner.go:130] > 2099
	I0307 18:16:49.951074  786188 api_server.go:71] duration metric: took 16.419676095s to wait for apiserver process to appear ...
	I0307 18:16:49.951090  786188 api_server.go:87] waiting for apiserver healthz status ...
	I0307 18:16:49.951099  786188 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0307 18:16:49.955641  786188 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0307 18:16:49.955704  786188 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0307 18:16:49.955713  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:49.955721  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:49.955730  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:49.956295  786188 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 18:16:49.956312  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:49.956323  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:49.956338  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:49.956351  786188 round_trippers.go:580]     Content-Length: 263
	I0307 18:16:49.956364  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:49 GMT
	I0307 18:16:49.956378  786188 round_trippers.go:580]     Audit-Id: 511ba1cb-11e2-49d8-8b94-49c70811be91
	I0307 18:16:49.956388  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:49.956397  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:49.956421  786188 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 18:16:49.956504  786188 api_server.go:140] control plane version: v1.26.2
	I0307 18:16:49.956521  786188 api_server.go:130] duration metric: took 5.425304ms to wait for apiserver health ...
	I0307 18:16:49.956534  786188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 18:16:50.138945  786188 request.go:622] Waited for 182.324216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.138996  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.139001  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.139008  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.139015  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.141960  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:50.141988  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.142000  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.142009  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.142017  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.142026  786188 round_trippers.go:580]     Audit-Id: 6f9f4ad5-f828-4996-8973-46c1a9a9c095
	I0307 18:16:50.142036  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.142049  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.143176  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55540 chars]
	I0307 18:16:50.145636  786188 system_pods.go:59] 8 kube-system pods found
	I0307 18:16:50.145662  786188 system_pods.go:61] "coredns-787d4945fb-fsll9" [17db7207-f2ce-4566-85fc-dc7e0eb65d09] Running
	I0307 18:16:50.145667  786188 system_pods.go:61] "etcd-multinode-242095" [58a90a44-38a6-4150-b6a5-d68e1257f6f3] Running
	I0307 18:16:50.145671  786188 system_pods.go:61] "kindnet-4sm84" [c406577e-74d2-4d81-b8a4-c827a78e2d61] Running
	I0307 18:16:50.145675  786188 system_pods.go:61] "kube-apiserver-multinode-242095" [17d64e05-257c-45b2-bec2-6b363cbfb788] Running
	I0307 18:16:50.145679  786188 system_pods.go:61] "kube-controller-manager-multinode-242095" [536246ee-9384-411a-bd3a-a3f3862a51bc] Running
	I0307 18:16:50.145683  786188 system_pods.go:61] "kube-proxy-rjsmj" [c20d9dc5-69a3-46f9-bdd7-7a54def58eac] Running
	I0307 18:16:50.145687  786188 system_pods.go:61] "kube-scheduler-multinode-242095" [bd31dd93-d9b4-4f7a-9d31-d15d68702789] Running
	I0307 18:16:50.145690  786188 system_pods.go:61] "storage-provisioner" [ea1890f3-3928-474e-8b2d-10da6a0e9f14] Running
	I0307 18:16:50.145696  786188 system_pods.go:74] duration metric: took 189.152884ms to wait for pod list to return data ...
	I0307 18:16:50.145703  786188 default_sa.go:34] waiting for default service account to be created ...
	I0307 18:16:50.339115  786188 request.go:622] Waited for 193.345838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0307 18:16:50.339183  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0307 18:16:50.339192  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.339199  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.339206  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.341266  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:16:50.341285  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.341293  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.341299  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.341304  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.341310  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.341316  786188 round_trippers.go:580]     Content-Length: 261
	I0307 18:16:50.341321  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.341327  786188 round_trippers.go:580]     Audit-Id: a800dba8-449d-434d-9dd2-ffe846382bf4
	I0307 18:16:50.341358  786188 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"46302e89-7181-491f-ae53-45d5e8f31c31","resourceVersion":"303","creationTimestamp":"2023-03-07T18:16:32Z"}}]}
	I0307 18:16:50.341550  786188 default_sa.go:45] found service account: "default"
	I0307 18:16:50.341562  786188 default_sa.go:55] duration metric: took 195.853934ms for default service account to be created ...
	I0307 18:16:50.341569  786188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 18:16:50.538993  786188 request.go:622] Waited for 197.357696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.539055  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:16:50.539063  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.539077  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.539093  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.542124  786188 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 18:16:50.542146  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.542153  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.542159  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.542165  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.542174  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.542183  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.542192  786188 round_trippers.go:580]     Audit-Id: f143b7a5-5b3e-4c43-a610-e314527d137f
	I0307 18:16:50.542590  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55540 chars]
	I0307 18:16:50.544985  786188 system_pods.go:86] 8 kube-system pods found
	I0307 18:16:50.545011  786188 system_pods.go:89] "coredns-787d4945fb-fsll9" [17db7207-f2ce-4566-85fc-dc7e0eb65d09] Running
	I0307 18:16:50.545019  786188 system_pods.go:89] "etcd-multinode-242095" [58a90a44-38a6-4150-b6a5-d68e1257f6f3] Running
	I0307 18:16:50.545028  786188 system_pods.go:89] "kindnet-4sm84" [c406577e-74d2-4d81-b8a4-c827a78e2d61] Running
	I0307 18:16:50.545041  786188 system_pods.go:89] "kube-apiserver-multinode-242095" [17d64e05-257c-45b2-bec2-6b363cbfb788] Running
	I0307 18:16:50.545048  786188 system_pods.go:89] "kube-controller-manager-multinode-242095" [536246ee-9384-411a-bd3a-a3f3862a51bc] Running
	I0307 18:16:50.545057  786188 system_pods.go:89] "kube-proxy-rjsmj" [c20d9dc5-69a3-46f9-bdd7-7a54def58eac] Running
	I0307 18:16:50.545064  786188 system_pods.go:89] "kube-scheduler-multinode-242095" [bd31dd93-d9b4-4f7a-9d31-d15d68702789] Running
	I0307 18:16:50.545070  786188 system_pods.go:89] "storage-provisioner" [ea1890f3-3928-474e-8b2d-10da6a0e9f14] Running
	I0307 18:16:50.545082  786188 system_pods.go:126] duration metric: took 203.507042ms to wait for k8s-apps to be running ...
	I0307 18:16:50.545097  786188 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 18:16:50.545147  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:16:50.554639  786188 system_svc.go:56] duration metric: took 9.5373ms WaitForService to wait for kubelet.
	I0307 18:16:50.554664  786188 kubeadm.go:578] duration metric: took 17.023267126s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 18:16:50.554689  786188 node_conditions.go:102] verifying NodePressure condition ...
	I0307 18:16:50.739111  786188 request.go:622] Waited for 184.334925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0307 18:16:50.739161  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0307 18:16:50.739165  786188 round_trippers.go:469] Request Headers:
	I0307 18:16:50.739174  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:16:50.739180  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:16:50.741195  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:16:50.741234  786188 round_trippers.go:577] Response Headers:
	I0307 18:16:50.741247  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:16:50.741262  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:16:50.741269  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:16:50.741275  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:16:50 GMT
	I0307 18:16:50.741284  786188 round_trippers.go:580]     Audit-Id: d471e32b-a91e-429d-acc8-aa5265b543e3
	I0307 18:16:50.741290  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:16:50.741429  786188 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5214 chars]
	I0307 18:16:50.741933  786188 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0307 18:16:50.741958  786188 node_conditions.go:123] node cpu capacity is 8
	I0307 18:16:50.741973  786188 node_conditions.go:105] duration metric: took 187.279803ms to run NodePressure ...
	I0307 18:16:50.741990  786188 start.go:228] waiting for startup goroutines ...
	I0307 18:16:50.742004  786188 start.go:233] waiting for cluster config update ...
	I0307 18:16:50.742021  786188 start.go:242] writing updated cluster config ...
	I0307 18:16:50.744145  786188 out.go:177] 
	I0307 18:16:50.745952  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:50.746044  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:16:50.747936  786188 out.go:177] * Starting worker node multinode-242095-m02 in cluster multinode-242095
	I0307 18:16:50.749260  786188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0307 18:16:50.750747  786188 out.go:177] * Pulling base image ...
	I0307 18:16:50.752517  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:16:50.752535  786188 cache.go:57] Caching tarball of preloaded images
	I0307 18:16:50.752545  786188 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon
	I0307 18:16:50.752624  786188 preload.go:174] Found /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 18:16:50.752639  786188 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 18:16:50.752729  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:16:50.815674  786188 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 in local docker daemon, skipping pull
	I0307 18:16:50.815700  786188 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 exists in daemon, skipping load
	I0307 18:16:50.815722  786188 cache.go:193] Successfully downloaded all kic artifacts
	I0307 18:16:50.815760  786188 start.go:364] acquiring machines lock for multinode-242095-m02: {Name:mk9ddc5dde012548a60ee1487f1c4b2a77a956b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:16:50.815871  786188 start.go:368] acquired machines lock for "multinode-242095-m02" in 86.682µs
	I0307 18:16:50.815899  786188 start.go:93] Provisioning new machine with config: &{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 18:16:50.815979  786188 start.go:125] createHost starting for "m02" (driver="docker")
	I0307 18:16:50.818373  786188 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 18:16:50.818510  786188 start.go:159] libmachine.API.Create for "multinode-242095" (driver="docker")
	I0307 18:16:50.818543  786188 client.go:168] LocalClient.Create starting
	I0307 18:16:50.818621  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem
	I0307 18:16:50.818668  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:16:50.818694  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:16:50.818768  786188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem
	I0307 18:16:50.818795  786188 main.go:141] libmachine: Decoding PEM data...
	I0307 18:16:50.818812  786188 main.go:141] libmachine: Parsing certificate...
	I0307 18:16:50.819015  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:16:50.879143  786188 network_create.go:76] Found existing network {name:multinode-242095 subnet:0xc001047b00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0307 18:16:50.879182  786188 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-242095-m02" container
	I0307 18:16:50.879237  786188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 18:16:50.941133  786188 cli_runner.go:164] Run: docker volume create multinode-242095-m02 --label name.minikube.sigs.k8s.io=multinode-242095-m02 --label created_by.minikube.sigs.k8s.io=true
	I0307 18:16:51.003964  786188 oci.go:103] Successfully created a docker volume multinode-242095-m02
	I0307 18:16:51.004065  786188 cli_runner.go:164] Run: docker run --rm --name multinode-242095-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095-m02 --entrypoint /usr/bin/test -v multinode-242095-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -d /var/lib
	I0307 18:16:51.595371  786188 oci.go:107] Successfully prepared a docker volume multinode-242095-m02
	I0307 18:16:51.595420  786188 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 18:16:51.595465  786188 kic.go:190] Starting extracting preloaded images to volume ...
	I0307 18:16:51.595533  786188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 18:16:56.644032  786188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242095-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 -I lz4 -xf /preloaded.tar -C /extractDir: (5.048448058s)
	I0307 18:16:56.644071  786188 kic.go:199] duration metric: took 5.048601 seconds to extract preloaded images to volume
	W0307 18:16:56.644239  786188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 18:16:56.644355  786188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 18:16:56.771459  786188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-242095-m02 --name multinode-242095-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242095-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-242095-m02 --network multinode-242095 --ip 192.168.58.3 --volume multinode-242095-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9
	I0307 18:16:57.240385  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Running}}
	I0307 18:16:57.313862  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:16:57.385444  786188 cli_runner.go:164] Run: docker exec multinode-242095-m02 stat /var/lib/dpkg/alternatives/iptables
	I0307 18:16:57.508588  786188 oci.go:144] the created container "multinode-242095-m02" has a running status.
	I0307 18:16:57.508623  786188 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa...
	I0307 18:16:57.977272  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0307 18:16:57.977331  786188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 18:16:58.083071  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:16:58.148879  786188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 18:16:58.148905  786188 kic_runner.go:114] Args: [docker exec --privileged multinode-242095-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 18:16:58.259844  786188 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:16:58.324538  786188 machine.go:88] provisioning docker machine ...
	I0307 18:16:58.324587  786188 ubuntu.go:169] provisioning hostname "multinode-242095-m02"
	I0307 18:16:58.324650  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:58.390990  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:58.391424  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:58.391438  786188 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-242095-m02 && echo "multinode-242095-m02" | sudo tee /etc/hostname
	I0307 18:16:58.515988  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-242095-m02
	
	I0307 18:16:58.516086  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:58.579662  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:58.580114  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:58.580133  786188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-242095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-242095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-242095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:16:58.690941  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:16:58.690975  786188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15985-636026/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-636026/.minikube}
	I0307 18:16:58.690997  786188 ubuntu.go:177] setting up certificates
	I0307 18:16:58.691009  786188 provision.go:83] configureAuth start
	I0307 18:16:58.691071  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095-m02
	I0307 18:16:58.757503  786188 provision.go:138] copyHostCerts
	I0307 18:16:58.757549  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:58.757574  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem, removing ...
	I0307 18:16:58.757583  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem
	I0307 18:16:58.757641  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/ca.pem (1082 bytes)
	I0307 18:16:58.757714  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:58.757732  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem, removing ...
	I0307 18:16:58.757735  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem
	I0307 18:16:58.757757  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/cert.pem (1123 bytes)
	I0307 18:16:58.757811  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:58.757827  786188 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem, removing ...
	I0307 18:16:58.757833  786188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem
	I0307 18:16:58.757856  786188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-636026/.minikube/key.pem (1679 bytes)
	I0307 18:16:58.757912  786188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem org=jenkins.multinode-242095-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-242095-m02]
	I0307 18:16:58.846079  786188 provision.go:172] copyRemoteCerts
	I0307 18:16:58.846145  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:16:58.846191  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:58.908801  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:16:58.990464  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 18:16:58.990517  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 18:16:59.007658  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 18:16:59.007729  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0307 18:16:59.024463  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 18:16:59.024520  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 18:16:59.040858  786188 provision.go:86] duration metric: configureAuth took 349.838345ms
	I0307 18:16:59.040879  786188 ubuntu.go:193] setting minikube options for container-runtime
	I0307 18:16:59.041027  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:16:59.041071  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:59.103856  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:59.104272  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:59.104285  786188 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 18:16:59.215528  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0307 18:16:59.215556  786188 ubuntu.go:71] root file system type: overlay
	I0307 18:16:59.215685  786188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 18:16:59.215772  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:59.281374  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:59.281811  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:59.281873  786188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 18:16:59.400263  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 18:16:59.400347  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:16:59.466849  786188 main.go:141] libmachine: Using SSH client type: native
	I0307 18:16:59.467335  786188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I0307 18:16:59.467355  786188 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 18:17:00.101666  786188 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-07 18:16:59.392737492 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0307 18:17:00.101702  786188 machine.go:91] provisioned docker machine in 1.777133458s
	I0307 18:17:00.101712  786188 client.go:171] LocalClient.Create took 9.28316106s
	I0307 18:17:00.101725  786188 start.go:167] duration metric: libmachine.API.Create for "multinode-242095" took 9.283216458s
	I0307 18:17:00.101734  786188 start.go:300] post-start starting for "multinode-242095-m02" (driver="docker")
	I0307 18:17:00.101747  786188 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:17:00.101813  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:17:00.101861  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.167841  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.255549  786188 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:17:00.258236  786188 command_runner.go:130] > NAME="Ubuntu"
	I0307 18:17:00.258261  786188 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0307 18:17:00.258268  786188 command_runner.go:130] > ID=ubuntu
	I0307 18:17:00.258274  786188 command_runner.go:130] > ID_LIKE=debian
	I0307 18:17:00.258280  786188 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0307 18:17:00.258285  786188 command_runner.go:130] > VERSION_ID="20.04"
	I0307 18:17:00.258292  786188 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0307 18:17:00.258296  786188 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0307 18:17:00.258301  786188 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0307 18:17:00.258309  786188 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0307 18:17:00.258315  786188 command_runner.go:130] > VERSION_CODENAME=focal
	I0307 18:17:00.258322  786188 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0307 18:17:00.258388  786188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 18:17:00.258405  786188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 18:17:00.258416  786188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 18:17:00.258424  786188 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0307 18:17:00.258438  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/addons for local assets ...
	I0307 18:17:00.258493  786188 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-636026/.minikube/files for local assets ...
	I0307 18:17:00.258582  786188 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> 6427432.pem in /etc/ssl/certs
	I0307 18:17:00.258594  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /etc/ssl/certs/6427432.pem
	I0307 18:17:00.258696  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 18:17:00.265462  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:17:00.283079  786188 start.go:303] post-start completed in 181.327543ms
	I0307 18:17:00.283382  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095-m02
	I0307 18:17:00.347727  786188 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/config.json ...
	I0307 18:17:00.347971  786188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:17:00.348012  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.412592  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.491458  786188 command_runner.go:130] > 17%! (MISSING)
	I0307 18:17:00.491740  786188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 18:17:00.495498  786188 command_runner.go:130] > 244G
	I0307 18:17:00.495529  786188 start.go:128] duration metric: createHost completed in 9.679540684s
	I0307 18:17:00.495538  786188 start.go:83] releasing machines lock for "multinode-242095-m02", held for 9.679655304s
	I0307 18:17:00.495609  786188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095-m02
	I0307 18:17:00.561381  786188 out.go:177] * Found network options:
	I0307 18:17:00.563029  786188 out.go:177]   - NO_PROXY=192.168.58.2
	W0307 18:17:00.564555  786188 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 18:17:00.564594  786188 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 18:17:00.564670  786188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 18:17:00.564708  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.564745  786188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 18:17:00.564794  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:00.634283  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.634283  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:00.751205  786188 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 18:17:00.752530  786188 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0307 18:17:00.752555  786188 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0307 18:17:00.752564  786188 command_runner.go:130] > Device: e3h/227d	Inode: 2131168     Links: 1
	I0307 18:17:00.752577  786188 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:17:00.752589  786188 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:17:00.752598  786188 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0307 18:17:00.752609  786188 command_runner.go:130] > Change: 2023-03-07 18:01:36.367924495 +0000
	I0307 18:17:00.752617  786188 command_runner.go:130] >  Birth: -
	I0307 18:17:00.752684  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 18:17:00.773083  786188 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 18:17:00.773155  786188 ssh_runner.go:195] Run: which cri-dockerd
	I0307 18:17:00.775894  786188 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 18:17:00.776012  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 18:17:00.783049  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 18:17:00.795569  786188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:17:00.810723  786188 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0307 18:17:00.810749  786188 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0307 18:17:00.810760  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:17:00.810790  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:17:00.810888  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:17:00.822212  786188 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:17:00.822233  786188 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 18:17:00.822924  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 18:17:00.830248  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:17:00.837462  786188 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:17:00.837502  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:17:00.844663  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:17:00.851989  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:17:00.859170  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:17:00.867008  786188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:17:00.873876  786188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 18:17:00.881482  786188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:17:00.888005  786188 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 18:17:00.888066  786188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:17:00.894114  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:17:00.975468  786188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:17:01.057347  786188 start.go:485] detecting cgroup driver to use...
	I0307 18:17:01.057405  786188 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:17:01.057456  786188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 18:17:01.067139  786188 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0307 18:17:01.067232  786188 command_runner.go:130] > [Unit]
	I0307 18:17:01.067254  786188 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 18:17:01.067263  786188 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 18:17:01.067274  786188 command_runner.go:130] > BindsTo=containerd.service
	I0307 18:17:01.067284  786188 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0307 18:17:01.067294  786188 command_runner.go:130] > Wants=network-online.target
	I0307 18:17:01.067306  786188 command_runner.go:130] > Requires=docker.socket
	I0307 18:17:01.067320  786188 command_runner.go:130] > StartLimitBurst=3
	I0307 18:17:01.067331  786188 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 18:17:01.067338  786188 command_runner.go:130] > [Service]
	I0307 18:17:01.067347  786188 command_runner.go:130] > Type=notify
	I0307 18:17:01.067353  786188 command_runner.go:130] > Restart=on-failure
	I0307 18:17:01.067371  786188 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0307 18:17:01.067397  786188 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 18:17:01.067415  786188 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 18:17:01.067430  786188 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 18:17:01.067465  786188 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 18:17:01.067479  786188 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 18:17:01.067488  786188 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 18:17:01.067502  786188 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 18:17:01.067518  786188 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 18:17:01.067532  786188 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 18:17:01.067537  786188 command_runner.go:130] > ExecStart=
	I0307 18:17:01.067560  786188 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0307 18:17:01.067570  786188 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 18:17:01.067580  786188 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 18:17:01.067593  786188 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 18:17:01.067603  786188 command_runner.go:130] > LimitNOFILE=infinity
	I0307 18:17:01.067609  786188 command_runner.go:130] > LimitNPROC=infinity
	I0307 18:17:01.067616  786188 command_runner.go:130] > LimitCORE=infinity
	I0307 18:17:01.067627  786188 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 18:17:01.067638  786188 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 18:17:01.067644  786188 command_runner.go:130] > TasksMax=infinity
	I0307 18:17:01.067653  786188 command_runner.go:130] > TimeoutStartSec=0
	I0307 18:17:01.067663  786188 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 18:17:01.067673  786188 command_runner.go:130] > Delegate=yes
	I0307 18:17:01.067687  786188 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 18:17:01.067697  786188 command_runner.go:130] > KillMode=process
	I0307 18:17:01.067704  786188 command_runner.go:130] > [Install]
	I0307 18:17:01.067713  786188 command_runner.go:130] > WantedBy=multi-user.target
	I0307 18:17:01.068134  786188 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0307 18:17:01.068203  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:17:01.078269  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:17:01.091176  786188 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:17:01.091199  786188 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 18:17:01.092364  786188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 18:17:01.202848  786188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 18:17:01.296007  786188 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 18:17:01.296044  786188 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 18:17:01.310648  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:17:01.386967  786188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 18:17:01.604834  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:17:01.683772  786188 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0307 18:17:01.683854  786188 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 18:17:01.760587  786188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 18:17:01.840236  786188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:17:01.912839  786188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 18:17:01.924383  786188 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 18:17:01.924454  786188 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 18:17:01.927415  786188 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 18:17:01.927437  786188 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 18:17:01.927471  786188 command_runner.go:130] > Device: ech/236d	Inode: 206         Links: 1
	I0307 18:17:01.927484  786188 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0307 18:17:01.927494  786188 command_runner.go:130] > Access: 2023-03-07 18:17:01.916991314 +0000
	I0307 18:17:01.927502  786188 command_runner.go:130] > Modify: 2023-03-07 18:17:01.916991314 +0000
	I0307 18:17:01.927511  786188 command_runner.go:130] > Change: 2023-03-07 18:17:01.916991314 +0000
	I0307 18:17:01.927515  786188 command_runner.go:130] >  Birth: -
	I0307 18:17:01.927536  786188 start.go:553] Will wait 60s for crictl version
	I0307 18:17:01.927573  786188 ssh_runner.go:195] Run: which crictl
	I0307 18:17:01.929994  786188 command_runner.go:130] > /usr/bin/crictl
	I0307 18:17:01.930118  786188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:17:02.009002  786188 command_runner.go:130] > Version:  0.1.0
	I0307 18:17:02.009021  786188 command_runner.go:130] > RuntimeName:  docker
	I0307 18:17:02.009025  786188 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0307 18:17:02.009030  786188 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 18:17:02.009047  786188 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0307 18:17:02.009095  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:17:02.030888  786188 command_runner.go:130] > 23.0.1
	I0307 18:17:02.032020  786188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 18:17:02.054247  786188 command_runner.go:130] > 23.0.1
	I0307 18:17:02.059136  786188 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0307 18:17:02.060696  786188 out.go:177]   - env NO_PROXY=192.168.58.2
	I0307 18:17:02.062107  786188 cli_runner.go:164] Run: docker network inspect multinode-242095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:17:02.127576  786188 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0307 18:17:02.130877  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:17:02.140481  786188 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095 for IP: 192.168.58.3
	I0307 18:17:02.140516  786188 certs.go:186] acquiring lock for shared ca certs: {Name:mk6aa9dfc4b93dc10fe6d5a07411d8b3adb46804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:17:02.140670  786188 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key
	I0307 18:17:02.140727  786188 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key
	I0307 18:17:02.140744  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 18:17:02.140761  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 18:17:02.140779  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 18:17:02.140796  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 18:17:02.140862  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem (1338 bytes)
	W0307 18:17:02.140908  786188 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743_empty.pem, impossibly tiny 0 bytes
	I0307 18:17:02.140922  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 18:17:02.140959  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/ca.pem (1082 bytes)
	I0307 18:17:02.140985  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:17:02.141009  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/home/jenkins/minikube-integration/15985-636026/.minikube/certs/key.pem (1679 bytes)
	I0307 18:17:02.141050  786188 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem (1708 bytes)
	I0307 18:17:02.141076  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem -> /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.141089  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem -> /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.141101  786188 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.141418  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:17:02.158494  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 18:17:02.175287  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:17:02.191822  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:17:02.208488  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/certs/642743.pem --> /usr/share/ca-certificates/642743.pem (1338 bytes)
	I0307 18:17:02.225034  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/ssl/certs/6427432.pem --> /usr/share/ca-certificates/6427432.pem (1708 bytes)
	I0307 18:17:02.241535  786188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:17:02.257607  786188 ssh_runner.go:195] Run: openssl version
	I0307 18:17:02.262425  786188 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0307 18:17:02.262532  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/642743.pem && ln -fs /usr/share/ca-certificates/642743.pem /etc/ssl/certs/642743.pem"
	I0307 18:17:02.269653  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.272528  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.272569  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:05 /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.272616  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/642743.pem
	I0307 18:17:02.277140  786188 command_runner.go:130] > 51391683
	I0307 18:17:02.277325  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/642743.pem /etc/ssl/certs/51391683.0"
	I0307 18:17:02.284243  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6427432.pem && ln -fs /usr/share/ca-certificates/6427432.pem /etc/ssl/certs/6427432.pem"
	I0307 18:17:02.291082  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.293812  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.293909  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:05 /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.293944  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6427432.pem
	I0307 18:17:02.298398  786188 command_runner.go:130] > 3ec20f2e
	I0307 18:17:02.298454  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6427432.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 18:17:02.305292  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:17:02.312208  786188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.314998  786188 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.315049  786188 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.315089  786188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:17:02.319291  786188 command_runner.go:130] > b5213941
	I0307 18:17:02.319466  786188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 18:17:02.326187  786188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 18:17:02.347919  786188 command_runner.go:130] > cgroupfs
	I0307 18:17:02.349252  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:17:02.349268  786188 cni.go:136] 2 nodes found, recommending kindnet
	I0307 18:17:02.349278  786188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 18:17:02.349300  786188 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-242095 NodeName:multinode-242095-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 18:17:02.349416  786188 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-242095-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 18:17:02.349473  786188 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-242095-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 18:17:02.349518  786188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 18:17:02.356623  786188 command_runner.go:130] > kubeadm
	I0307 18:17:02.356639  786188 command_runner.go:130] > kubectl
	I0307 18:17:02.356646  786188 command_runner.go:130] > kubelet
	I0307 18:17:02.356668  786188 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:17:02.356715  786188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0307 18:17:02.363580  786188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0307 18:17:02.375684  786188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:17:02.387570  786188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0307 18:17:02.390230  786188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:17:02.399218  786188 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:17:02.399461  786188 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:17:02.399461  786188 start.go:301] JoinCluster: &{Name:multinode-242095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-242095 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:17:02.399544  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 18:17:02.399593  786188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:17:02.463017  786188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:17:02.603231  786188 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token n4scxe.np4ruqei2g7m4axa --discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb 
	I0307 18:17:02.603307  786188 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 18:17:02.603344  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n4scxe.np4ruqei2g7m4axa --discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-242095-m02"
	I0307 18:17:02.640308  786188 command_runner.go:130] > [preflight] Running pre-flight checks
	I0307 18:17:02.666566  786188 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0307 18:17:02.666590  786188 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0307 18:17:02.666598  786188 command_runner.go:130] > OS: Linux
	I0307 18:17:02.666605  786188 command_runner.go:130] > CGROUPS_CPU: enabled
	I0307 18:17:02.666613  786188 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0307 18:17:02.666621  786188 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0307 18:17:02.666628  786188 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0307 18:17:02.666636  786188 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0307 18:17:02.666644  786188 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0307 18:17:02.666661  786188 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0307 18:17:02.666673  786188 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0307 18:17:02.666684  786188 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0307 18:17:02.747472  786188 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0307 18:17:02.747511  786188 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0307 18:17:02.774397  786188 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:17:02.774430  786188 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:17:02.774436  786188 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 18:17:02.863585  786188 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0307 18:17:04.380510  786188 command_runner.go:130] > This node has joined the cluster:
	I0307 18:17:04.380541  786188 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0307 18:17:04.380550  786188 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0307 18:17:04.380560  786188 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0307 18:17:04.383031  786188 command_runner.go:130] ! W0307 18:17:02.639882    1339 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 18:17:04.383072  786188 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0307 18:17:04.383085  786188 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:17:04.383108  786188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n4scxe.np4ruqei2g7m4axa --discovery-token-ca-cert-hash sha256:19489d607321881efd3d3f8731823aced8f7d16230c2945a2829672e5b6115bb --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-242095-m02": (1.779744201s)
	I0307 18:17:04.383133  786188 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 18:17:04.553919  786188 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0307 18:17:04.553966  786188 start.go:303] JoinCluster complete in 2.154521785s
	I0307 18:17:04.553980  786188 cni.go:84] Creating CNI manager for ""
	I0307 18:17:04.553987  786188 cni.go:136] 2 nodes found, recommending kindnet
	I0307 18:17:04.554030  786188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 18:17:04.557461  786188 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 18:17:04.557489  786188 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0307 18:17:04.557499  786188 command_runner.go:130] > Device: 36h/54d	Inode: 2129263     Links: 1
	I0307 18:17:04.557510  786188 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 18:17:04.557519  786188 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:17:04.557527  786188 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0307 18:17:04.557534  786188 command_runner.go:130] > Change: 2023-03-07 18:01:35.631850484 +0000
	I0307 18:17:04.557541  786188 command_runner.go:130] >  Birth: -
	I0307 18:17:04.557588  786188 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 18:17:04.557599  786188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 18:17:04.570601  786188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 18:17:04.729915  786188 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0307 18:17:04.733114  786188 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0307 18:17:04.735252  786188 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0307 18:17:04.747502  786188 command_runner.go:130] > daemonset.apps/kindnet configured
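The four `command_runner` lines above are kubectl's per-object apply results for the kindnet CNI manifest: three objects were already up to date and only the DaemonSet needed an update. As a purely illustrative sketch (not something the test itself does), these result lines have the fixed form `<resource> <action>` and can be split apart to see which objects changed:

```python
# kubectl apply result lines, copied verbatim from the log above.
lines = [
    "clusterrole.rbac.authorization.k8s.io/kindnet unchanged",
    "clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged",
    "serviceaccount/kindnet unchanged",
    "daemonset.apps/kindnet configured",
]

# Each line is "<resource> <action>"; split on the last space.
results = dict(line.rsplit(" ", 1) for line in lines)

# Anything not "unchanged" is an object kubectl actually had to update.
changed = [res for res, action in results.items() if action != "unchanged"]
print(changed)  # → ['daemonset.apps/kindnet']
```

Here only the `kindnet` DaemonSet was reconfigured, which is expected when a second node joins and the manifest is re-applied.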
	I0307 18:17:04.751561  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:17:04.751889  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:17:04.752290  786188 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 18:17:04.752304  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.752315  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.752322  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.754283  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.754302  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.754310  786188 round_trippers.go:580]     Content-Length: 291
	I0307 18:17:04.754315  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.754321  786188 round_trippers.go:580]     Audit-Id: 6bda0436-6cff-4b57-bee7-a8697c5bbc6c
	I0307 18:17:04.754327  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.754337  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.754343  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.754350  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.754373  786188 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"420","creationTimestamp":"2023-03-07T18:16:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 18:17:04.754463  786188 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-242095" context rescaled to 1 replicas
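The "rescaled to 1 replicas" decision above is driven by the `autoscaling/v1` Scale object returned by the GET on the coredns deployment's `/scale` subresource. A minimal sketch, in Python for illustration, of reading the desired and observed replica counts out of that exact response body:

```python
import json

# The Scale response body logged above, verbatim.
body = (
    '{"kind":"Scale","apiVersion":"autoscaling/v1",'
    '"metadata":{"name":"coredns","namespace":"kube-system",'
    '"uid":"e9c3a279-9625-4694-bc3b-1ec27608a577","resourceVersion":"420",'
    '"creationTimestamp":"2023-03-07T18:16:19Z"},'
    '"spec":{"replicas":1},'
    '"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}'
)

scale = json.loads(body)

# spec.replicas is the desired count; status.replicas is what the
# controller currently observes. Here both are already 1, so the
# "rescale" is effectively a no-op.
print(scale["spec"]["replicas"], scale["status"]["replicas"])  # → 1 1
```

minikube rescales coredns to one replica on multinode clusters to avoid the default per-node scaling; since `spec.replicas` was already 1, the subsequent update changed nothing.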
	I0307 18:17:04.754489  786188 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 18:17:04.757797  786188 out.go:177] * Verifying Kubernetes components...
	I0307 18:17:04.759265  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:17:04.769244  786188 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:17:04.769474  786188 kapi.go:59] client config for multinode-242095: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/profiles/multinode-242095/client.key", CAFile:"/home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:17:04.769741  786188 node_ready.go:35] waiting up to 6m0s for node "multinode-242095-m02" to be "Ready" ...
	I0307 18:17:04.769803  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:04.769810  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.769820  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.769828  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.771655  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.771672  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.771679  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.771685  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.771691  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.771697  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.771702  786188 round_trippers.go:580]     Audit-Id: 5ab08e79-ef26-4515-968b-fc3732264a78
	I0307 18:17:04.771708  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.771855  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:04.772156  786188 node_ready.go:49] node "multinode-242095-m02" has status "Ready":"True"
	I0307 18:17:04.772188  786188 node_ready.go:38] duration metric: took 2.413192ms waiting for node "multinode-242095-m02" to be "Ready" ...
	I0307 18:17:04.772196  786188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:17:04.772247  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0307 18:17:04.772254  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.772261  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.772267  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.774872  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:04.774890  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.774898  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.774904  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.774910  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.774916  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.774923  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.774936  786188 round_trippers.go:580]     Audit-Id: d7d62a2a-e7fe-4ac9-ae28-42cc2fa44afa
	I0307 18:17:04.775429  786188 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"466"},"items":[{"metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65879 chars]
	I0307 18:17:04.777438  786188 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.777500  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fsll9
	I0307 18:17:04.777508  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.777515  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.777521  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.779153  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.779173  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.779183  786188 round_trippers.go:580]     Audit-Id: b81b45b2-f52a-42fc-83b1-81f75a10d239
	I0307 18:17:04.779192  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.779201  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.779209  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.779222  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.779231  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.779341  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-fsll9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"17db7207-f2ce-4566-85fc-dc7e0eb65d09","resourceVersion":"416","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d20c3d79-294d-4e54-9c67-6ca556a54259","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d20c3d79-294d-4e54-9c67-6ca556a54259\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6489 chars]
	I0307 18:17:04.779878  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.779893  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.779905  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.779915  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.781445  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.781464  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.781474  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.781483  786188 round_trippers.go:580]     Audit-Id: 36d0bce5-eb66-4170-930c-123103ba6647
	I0307 18:17:04.781492  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.781501  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.781511  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.781527  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.781653  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.781955  786188 pod_ready.go:92] pod "coredns-787d4945fb-fsll9" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.781966  786188 pod_ready.go:81] duration metric: took 4.5092ms waiting for pod "coredns-787d4945fb-fsll9" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.781979  786188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.782030  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-242095
	I0307 18:17:04.782038  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.782045  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.782052  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.783513  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.783532  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.783542  786188 round_trippers.go:580]     Audit-Id: 75064e64-ad99-4f83-b506-f539966c9e5c
	I0307 18:17:04.783552  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.783562  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.783576  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.783589  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.783601  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.783684  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-242095","namespace":"kube-system","uid":"58a90a44-38a6-4150-b6a5-d68e1257f6f3","resourceVersion":"286","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.mirror":"3b6a03326c0b3775a14cb932fc6cec2b","kubernetes.io/config.seen":"2023-03-07T18:16:19.703879850Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0307 18:17:04.784037  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.784048  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.784054  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.784062  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.785436  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.785452  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.785459  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.785464  786188 round_trippers.go:580]     Audit-Id: 0e99177c-b970-4d49-873e-746198f343af
	I0307 18:17:04.785470  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.785476  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.785482  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.785490  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.785599  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.785906  786188 pod_ready.go:92] pod "etcd-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.785920  786188 pod_ready.go:81] duration metric: took 3.927043ms waiting for pod "etcd-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.785939  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.785988  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-242095
	I0307 18:17:04.785998  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.786009  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.786019  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.787572  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.787594  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.787605  786188 round_trippers.go:580]     Audit-Id: 327534ab-e94f-4189-8db5-10e32565ea6e
	I0307 18:17:04.787613  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.787619  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.787625  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.787634  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.787639  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.787745  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-242095","namespace":"kube-system","uid":"17d64e05-257c-45b2-bec2-6b363cbfb788","resourceVersion":"293","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.mirror":"e6407eb55a1944937cba3e31bce696d3","kubernetes.io/config.seen":"2023-03-07T18:16:19.703896620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0307 18:17:04.788224  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.788238  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.788249  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.788259  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.789739  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.789754  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.789764  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.789773  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.789786  786188 round_trippers.go:580]     Audit-Id: bcc7fb07-7b23-461a-af74-993a7168158d
	I0307 18:17:04.789798  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.789812  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.789822  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.789918  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.790302  786188 pod_ready.go:92] pod "kube-apiserver-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.790321  786188 pod_ready.go:81] duration metric: took 4.368106ms waiting for pod "kube-apiserver-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.790337  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.790388  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-242095
	I0307 18:17:04.790399  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.790410  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.790424  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.791979  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.791996  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.792003  786188 round_trippers.go:580]     Audit-Id: 646a942d-c99c-4307-99f0-ab5d2cee3ee0
	I0307 18:17:04.792009  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.792014  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.792019  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.792024  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.792030  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.792162  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-242095","namespace":"kube-system","uid":"536246ee-9384-411a-bd3a-a3f3862a51bc","resourceVersion":"291","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.mirror":"3735a30cba015d9a6e313a87fb4f42e5","kubernetes.io/config.seen":"2023-03-07T18:16:19.703897932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0307 18:17:04.792511  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:04.792522  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.792529  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.792535  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.793926  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:04.793945  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.793952  786188 round_trippers.go:580]     Audit-Id: 76ba7c60-2007-4db3-924f-d34917cf3d9f
	I0307 18:17:04.793958  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.793965  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.793974  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.793994  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.794003  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.794086  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:04.794440  786188 pod_ready.go:92] pod "kube-controller-manager-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:04.794452  786188 pod_ready.go:81] duration metric: took 4.10398ms waiting for pod "kube-controller-manager-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.794463  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:04.970679  786188 request.go:622] Waited for 176.114787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rjsmj
	I0307 18:17:04.970735  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rjsmj
	I0307 18:17:04.970740  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:04.970747  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:04.970754  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:04.972924  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:04.972947  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:04.972955  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:04.972962  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:04.972972  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:04.972985  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:04 GMT
	I0307 18:17:04.973000  786188 round_trippers.go:580]     Audit-Id: 1f5efaf3-dd1e-4e2f-9e8f-24a78d79c420
	I0307 18:17:04.973010  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:04.973133  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rjsmj","generateName":"kube-proxy-","namespace":"kube-system","uid":"c20d9dc5-69a3-46f9-bdd7-7a54def58eac","resourceVersion":"382","creationTimestamp":"2023-03-07T18:16:32Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0307 18:17:05.169945  786188 request.go:622] Waited for 196.28407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:05.170014  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:05.170022  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:05.170032  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:05.170044  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:05.171957  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:05.171987  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:05.171995  786188 round_trippers.go:580]     Audit-Id: d2d078f8-df73-4307-8af3-3e6e92376332
	I0307 18:17:05.172001  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:05.172007  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:05.172012  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:05.172032  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:05.172041  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:05 GMT
	I0307 18:17:05.172149  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:05.172476  786188 pod_ready.go:92] pod "kube-proxy-rjsmj" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:05.172488  786188 pod_ready.go:81] duration metric: took 378.016795ms waiting for pod "kube-proxy-rjsmj" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:05.172515  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbx65" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:05.369898  786188 request.go:622] Waited for 197.284834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:05.369956  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:05.369961  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:05.369969  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:05.369975  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:05.371962  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:05.371988  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:05.371999  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:05 GMT
	I0307 18:17:05.372008  786188 round_trippers.go:580]     Audit-Id: 33371919-c646-4a6b-a5a1-5ae951e16acd
	I0307 18:17:05.372020  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:05.372028  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:05.372040  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:05.372052  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:05.372132  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"451","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0307 18:17:05.570854  786188 request.go:622] Waited for 198.337189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:05.570923  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:05.570930  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:05.570938  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:05.570945  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:05.573112  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:05.573143  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:05.573154  786188 round_trippers.go:580]     Audit-Id: e4fc431e-fada-45a8-8a7e-28cdfe70ed72
	I0307 18:17:05.573163  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:05.573173  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:05.573181  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:05.573189  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:05.573195  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:05 GMT
	I0307 18:17:05.573303  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:06.074405  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:06.074431  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.074443  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.074451  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.076488  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:06.076513  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.076524  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.076534  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.076543  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.076552  786188 round_trippers.go:580]     Audit-Id: 41d104a9-c653-4be1-bf45-49fb0bf4596b
	I0307 18:17:06.076562  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.076575  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.076821  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"451","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0307 18:17:06.077315  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:06.077333  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.077345  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.077356  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.079034  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:06.079053  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.079062  786188 round_trippers.go:580]     Audit-Id: 816b0a92-7699-4b39-be91-cc4cec73a097
	I0307 18:17:06.079070  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.079078  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.079087  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.079096  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.079110  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.079184  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:06.573821  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:06.573842  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.573850  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.573856  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.575845  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:06.575870  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.575881  786188 round_trippers.go:580]     Audit-Id: 5db198d7-9c17-4e61-8505-50063c17e0bd
	I0307 18:17:06.575890  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.575897  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.575904  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.575917  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.575931  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.576038  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"472","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0307 18:17:06.576480  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:06.576493  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:06.576503  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:06.576511  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:06.578044  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:06.578070  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:06.578081  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:06.578090  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:06.578102  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:06 GMT
	I0307 18:17:06.578116  786188 round_trippers.go:580]     Audit-Id: a6bad01d-adb7-4164-9afd-5324a0b8ee59
	I0307 18:17:06.578129  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:06.578142  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:06.578238  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:07.074114  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbx65
	I0307 18:17:07.074136  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.074144  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.074150  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.076312  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:07.076336  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.076348  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.076356  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.076365  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.076377  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.076395  786188 round_trippers.go:580]     Audit-Id: 019ed11c-1f24-4e21-99b5-211218cc820e
	I0307 18:17:07.076407  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.076534  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbx65","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef56bc75-ad20-41de-b282-ea3b5c6d458b","resourceVersion":"475","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7354289a-2bc5-4fb3-abaa-60b560638ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7354289a-2bc5-4fb3-abaa-60b560638ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0307 18:17:07.076988  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095-m02
	I0307 18:17:07.077001  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.077008  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.077017  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.078515  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:07.078536  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.078547  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.078556  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.078565  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.078578  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.078590  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.078598  786188 round_trippers.go:580]     Audit-Id: e9704b82-a5c0-42b8-9ce3-08ed6df69e9d
	I0307 18:17:07.078666  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095-m02","uid":"c7720131-a075-463c-8e49-0b14ef1f5ff1","resourceVersion":"466","creationTimestamp":"2023-03-07T18:17:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0307 18:17:07.078950  786188 pod_ready.go:92] pod "kube-proxy-tbx65" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:07.078968  786188 pod_ready.go:81] duration metric: took 1.906438577s waiting for pod "kube-proxy-tbx65" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:07.078979  786188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:07.079088  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-242095
	I0307 18:17:07.079102  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.079109  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.079118  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.080680  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:07.080698  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.080708  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.080717  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.080726  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.080735  786188 round_trippers.go:580]     Audit-Id: 0e0b38f8-11a8-4ac1-ace6-6a36e85be0a3
	I0307 18:17:07.080748  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.080758  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.080847  786188 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-242095","namespace":"kube-system","uid":"bd31dd93-d9b4-4f7a-9d31-d15d68702789","resourceVersion":"282","creationTimestamp":"2023-03-07T18:16:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.mirror":"a9799497b54147e7f005bd47084fe394","kubernetes.io/config.seen":"2023-03-07T18:16:19.703898726Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0307 18:17:07.170394  786188 request.go:622] Waited for 89.236345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:07.170448  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-242095
	I0307 18:17:07.170454  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.170463  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.170469  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.172241  786188 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 18:17:07.172264  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.172274  786188 round_trippers.go:580]     Audit-Id: 2416e965-e2a1-4f39-a1b8-08d51b3036c9
	I0307 18:17:07.172282  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.172290  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.172300  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.172316  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.172324  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.172432  786188 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:16:16Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0307 18:17:07.172845  786188 pod_ready.go:92] pod "kube-scheduler-multinode-242095" in "kube-system" namespace has status "Ready":"True"
	I0307 18:17:07.172860  786188 pod_ready.go:81] duration metric: took 93.86566ms waiting for pod "kube-scheduler-multinode-242095" in "kube-system" namespace to be "Ready" ...
	I0307 18:17:07.172873  786188 pod_ready.go:38] duration metric: took 2.40066872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 18:17:07.172901  786188 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 18:17:07.172955  786188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:17:07.183664  786188 system_svc.go:56] duration metric: took 10.753644ms WaitForService to wait for kubelet.
	I0307 18:17:07.183694  786188 kubeadm.go:578] duration metric: took 2.429179914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 18:17:07.183721  786188 node_conditions.go:102] verifying NodePressure condition ...
	I0307 18:17:07.370009  786188 request.go:622] Waited for 186.206724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0307 18:17:07.370061  786188 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0307 18:17:07.370067  786188 round_trippers.go:469] Request Headers:
	I0307 18:17:07.370077  786188 round_trippers.go:473]     Accept: application/json, */*
	I0307 18:17:07.370093  786188 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0307 18:17:07.372224  786188 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 18:17:07.372245  786188 round_trippers.go:577] Response Headers:
	I0307 18:17:07.372252  786188 round_trippers.go:580]     Audit-Id: 8fe4bee5-321b-4155-8194-97f7514122c6
	I0307 18:17:07.372258  786188 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 18:17:07.372267  786188 round_trippers.go:580]     Content-Type: application/json
	I0307 18:17:07.372280  786188 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 101f2d7e-028b-443a-80b1-e51324d1c168
	I0307 18:17:07.372294  786188 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 057ada6b-62c7-4941-b6d2-b69e374b0c5c
	I0307 18:17:07.372303  786188 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:17:07 GMT
	I0307 18:17:07.372511  786188 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"477"},"items":[{"metadata":{"name":"multinode-242095","uid":"752708d7-4045-4527-8fc5-090e8e7161bc","resourceVersion":"422","creationTimestamp":"2023-03-07T18:16:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-242095","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-242095","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T18_16_20_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0307 18:17:07.373014  786188 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0307 18:17:07.373029  786188 node_conditions.go:123] node cpu capacity is 8
	I0307 18:17:07.373039  786188 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0307 18:17:07.373048  786188 node_conditions.go:123] node cpu capacity is 8
	I0307 18:17:07.373056  786188 node_conditions.go:105] duration metric: took 189.325303ms to run NodePressure ...
	I0307 18:17:07.373071  786188 start.go:228] waiting for startup goroutines ...
	I0307 18:17:07.373097  786188 start.go:242] writing updated cluster config ...
	I0307 18:17:07.373373  786188 ssh_runner.go:195] Run: rm -f paused
	I0307 18:17:07.438086  786188 start.go:555] kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)
	I0307 18:17:07.441217  786188 out.go:177] * Done! kubectl is now configured to use "multinode-242095" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-03-07 18:16:02 UTC, end at Tue 2023-03-07 18:17:16 UTC. --
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.734632497Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735312424Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735331053Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735355546Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735365363Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735395423Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735418868Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735470681Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735504291Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735551786Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735565628Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735802269Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.735837685Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.736250182Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.747143391Z" level=info msg="Loading containers: start."
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.824704209Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.857862864Z" level=info msg="Loading containers: done."
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.866502667Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.866547440Z" level=info msg="Daemon has completed initialization"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.879122453Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 07 18:16:05 multinode-242095 systemd[1]: Started Docker Application Container Engine.
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.886202533Z" level=info msg="API listen on [::]:2376"
	Mar 07 18:16:05 multinode-242095 dockerd[942]: time="2023-03-07T18:16:05.890155282Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 07 18:16:48 multinode-242095 dockerd[942]: time="2023-03-07T18:16:48.613799363Z" level=info msg="ignoring event" container=0cd1efd5f84995fe9a1cd5f12f40cab5d19af6fc5bcc048cf5593a0364d25282 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 18:16:48 multinode-242095 dockerd[942]: time="2023-03-07T18:16:48.681515987Z" level=info msg="ignoring event" container=a3f26daf9a048370abc6adb78a364048bff982cd7ec4bc2104a111cebea0a0ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	2d38b72e17f8d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   7 seconds ago        Running             busybox                   0                   3b793adae6b73
	ca75755eaea14       5185b96f0becf                                                                                         28 seconds ago       Running             coredns                   1                   9ce64715198e5
	282af577ac38b       kindest/kindnetd@sha256:7fc2671641a1a7e7b9b8341964bd7cfe9018f497dc41d58803f88b0cc4030e07              41 seconds ago       Running             kindnet-cni               0                   295ceb8066cdb
	0cd1efd5f8499       5185b96f0becf                                                                                         41 seconds ago       Exited              coredns                   0                   a3f26daf9a048
	639eded705175       6e38f40d628db                                                                                         42 seconds ago       Running             storage-provisioner       0                   26360294bb1ca
	bd0a44cc6e392       6f64e7135a6ec                                                                                         43 seconds ago       Running             kube-proxy                0                   0819928f73228
	02c3cc1dc6e48       db8f409d9a5d7                                                                                         About a minute ago   Running             kube-scheduler            0                   66ddf08048c9a
	0ecca898654fc       240e201d5b0d8                                                                                         About a minute ago   Running             kube-controller-manager   0                   d747de4f228aa
	3a83d434102f0       63d3239c3c159                                                                                         About a minute ago   Running             kube-apiserver            0                   acdde955f0646
	acdfade9da182       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   90379e88a49cf
	
	* 
	* ==> coredns [0cd1efd5f849] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:45668 - 59500 "HINFO IN 5563505768494837254.4717279030372536171. udp 57 false 512" - - 0 5.000140106s
	[ERROR] plugin/errors: 2 5563505768494837254.4717279030372536171. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50721 - 9079 "HINFO IN 5563505768494837254.4717279030372536171. udp 57 false 512" - - 0 5.000078385s
	[ERROR] plugin/errors: 2 5563505768494837254.4717279030372536171. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [ca75755eaea1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52841 - 53274 "HINFO IN 6905225731476761590.1859547329355607286. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00997676s
	[INFO] 10.244.0.3:58612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190161s
	[INFO] 10.244.0.3:37638 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.019184904s
	[INFO] 10.244.0.3:34309 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000616292s
	[INFO] 10.244.0.3:59850 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009421284s
	[INFO] 10.244.0.3:36390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128365s
	[INFO] 10.244.0.3:45601 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004941893s
	[INFO] 10.244.0.3:53754 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145115s
	[INFO] 10.244.0.3:44417 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154297s
	[INFO] 10.244.0.3:45670 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011981764s
	[INFO] 10.244.0.3:35547 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157822s
	[INFO] 10.244.0.3:51913 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137317s
	[INFO] 10.244.0.3:46444 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093313s
	[INFO] 10.244.0.3:56821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159548s
	[INFO] 10.244.0.3:49995 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010494s
	[INFO] 10.244.0.3:49513 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097357s
	[INFO] 10.244.0.3:49797 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098899s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-242095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-242095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1
	                    minikube.k8s.io/name=multinode-242095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_07T18_16_20_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Mar 2023 18:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-242095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Mar 2023 18:17:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Mar 2023 18:16:50 +0000   Tue, 07 Mar 2023 18:16:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-242095
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	System Info:
	  Machine ID:                 77d9a686e4a545abb7cfdc7dc7b2947f
	  System UUID:                8846a9c2-9acf-44c6-8c4e-298bf897e420
	  Boot ID:                    f01f161d-486d-4652-b75e-ddd4310bc409
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-rfr2n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-787d4945fb-fsll9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     44s
	  kube-system                 etcd-multinode-242095                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         56s
	  kube-system                 kindnet-4sm84                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-apiserver-multinode-242095             250m (3%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-multinode-242095    200m (2%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-rjsmj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-multinode-242095             100m (1%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 42s   kube-proxy       
	  Normal  Starting                 57s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s   kubelet          Node multinode-242095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s   kubelet          Node multinode-242095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s   kubelet          Node multinode-242095 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             57s   kubelet          Node multinode-242095 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  56s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                46s   kubelet          Node multinode-242095 status is now: NodeReady
	  Normal  RegisteredNode           44s   node-controller  Node multinode-242095 event: Registered Node multinode-242095 in Controller
	
	
	Name:               multinode-242095-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-242095-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Mar 2023 18:17:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-242095-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Mar 2023 18:17:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Mar 2023 18:17:04 +0000   Tue, 07 Mar 2023 18:17:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-242095-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871744Ki
	  pods:               110
	System Info:
	  Machine ID:                 77d9a686e4a545abb7cfdc7dc7b2947f
	  System UUID:                3fd908b3-2170-488e-8e30-9fff994820a6
	  Boot ID:                    f01f161d-486d-4652-b75e-ddd4310bc409
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-jvgsd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-j52z6               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13s
	  kube-system                 kube-proxy-tbx65            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)  kubelet          Node multinode-242095-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)  kubelet          Node multinode-242095-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)  kubelet          Node multinode-242095-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12s                kubelet          Node multinode-242095-m02 status is now: NodeReady
	  Normal  RegisteredNode           9s                 node-controller  Node multinode-242095-m02 event: Registered Node multinode-242095-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.006304] FS-Cache: N-cookie c=0000001c [p=00000012 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=000000003c89735b{9p.inode} n=00000000aa45e9c8
	[  +0.008750] FS-Cache: N-key=[8] 'c8a20f0200000000'
	[  +4.456100] FS-Cache: Duplicate cookie detected
	[  +0.004739] FS-Cache: O-cookie c=00000015 [p=00000012 fl=226 nc=0 na=1]
	[  +0.006752] FS-Cache: O-cookie d=000000003c89735b{9p.inode} n=00000000a40be2fd
	[  +0.007366] FS-Cache: O-key=[8] 'c7a20f0200000000'
	[  +0.004962] FS-Cache: N-cookie c=0000001e [p=00000012 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=000000003c89735b{9p.inode} n=0000000081b67dfd
	[  +0.008746] FS-Cache: N-key=[8] 'c7a20f0200000000'
	[  +0.610772] FS-Cache: Duplicate cookie detected
	[  +0.004706] FS-Cache: O-cookie c=00000018 [p=00000012 fl=226 nc=0 na=1]
	[  +0.006763] FS-Cache: O-cookie d=000000003c89735b{9p.inode} n=0000000085ae5241
	[  +0.007353] FS-Cache: O-key=[8] 'cca20f0200000000'
	[  +0.004916] FS-Cache: N-cookie c=0000001f [p=00000012 fl=2 nc=0 na=1]
	[  +0.006600] FS-Cache: N-cookie d=000000003c89735b{9p.inode} n=00000000cfef0339
	[  +0.008725] FS-Cache: N-key=[8] 'cca20f0200000000'
	[ +11.432134] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 54 93 73 69 4f 08 06
	[Mar 7 18:11] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 57 14 d9 a5 58 08 06
	[  +0.191019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a be 16 89 59 4e 08 06
	[Mar 7 18:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 85 8e 3d d0 47 08 06
	
	* 
	* ==> etcd [acdfade9da18] <==
	* {"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-07T18:16:14.118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-03-07T18:16:14.119Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-03-07T18:16:15.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-03-07T18:16:15.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-07T18:16:15.111Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.112Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:16:15.112Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-242095 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-07T18:16:15.112Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.113Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:16:15.114Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-07T18:16:15.114Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-03-07T18:16:55.122Z","caller":"traceutil/trace.go:171","msg":"trace[1470268138] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"149.242016ms","start":"2023-03-07T18:16:54.973Z","end":"2023-03-07T18:16:55.122Z","steps":["trace[1470268138] 'process raft request'  (duration: 149.099672ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:17:16 up  1:59,  0 users,  load average: 2.47, 2.34, 2.27
	Linux multinode-242095 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [282af577ac38] <==
	* I0307 18:16:35.893479       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0307 18:16:35.893519       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0307 18:16:35.893646       1 main.go:116] setting mtu 1500 for CNI 
	I0307 18:16:35.893668       1 main.go:146] kindnetd IP family: "ipv4"
	I0307 18:16:35.893680       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0307 18:16:36.195264       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:16:36.195290       1 main.go:227] handling current node
	I0307 18:16:46.306691       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:16:46.306722       1 main.go:227] handling current node
	I0307 18:16:56.319046       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:16:56.319077       1 main.go:227] handling current node
	I0307 18:17:06.330915       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:17:06.330940       1 main.go:227] handling current node
	I0307 18:17:06.330950       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0307 18:17:06.330955       1 main.go:250] Node multinode-242095-m02 has CIDR [10.244.1.0/24] 
	I0307 18:17:06.331131       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0307 18:17:16.343035       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0307 18:17:16.343064       1 main.go:227] handling current node
	I0307 18:17:16.343077       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0307 18:17:16.343083       1 main.go:250] Node multinode-242095-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [3a83d434102f] <==
	* I0307 18:16:16.804470       1 controller.go:615] quota admission added evaluator for: namespaces
	E0307 18:16:16.810325       1 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: namespaces "kube-system" not found
	I0307 18:16:16.893406       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0307 18:16:16.893417       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0307 18:16:16.893765       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0307 18:16:16.893803       1 shared_informer.go:280] Caches are synced for configmaps
	I0307 18:16:16.894107       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0307 18:16:16.894136       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0307 18:16:17.012372       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 18:16:17.486290       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0307 18:16:17.697596       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0307 18:16:17.701154       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0307 18:16:17.701176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0307 18:16:18.060179       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 18:16:18.090497       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0307 18:16:18.150330       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0307 18:16:18.155383       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0307 18:16:18.156323       1 controller.go:615] quota admission added evaluator for: endpoints
	I0307 18:16:18.159820       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 18:16:18.716049       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0307 18:16:19.601328       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0307 18:16:19.609959       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0307 18:16:19.617679       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0307 18:16:32.704779       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0307 18:16:32.886267       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [0ecca898654f] <==
	* I0307 18:16:32.897860       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rjsmj"
	I0307 18:16:32.914948       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0307 18:16:32.915841       1 shared_informer.go:280] Caches are synced for taint
	I0307 18:16:32.915928       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0307 18:16:32.915955       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	I0307 18:16:32.916011       1 taint_manager.go:211] "Sending events to api server"
	W0307 18:16:32.916029       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-242095. Assuming now as a timestamp.
	I0307 18:16:32.916078       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0307 18:16:32.916346       1 event.go:294] "Event occurred" object="multinode-242095" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-242095 event: Registered Node multinode-242095 in Controller"
	I0307 18:16:32.919053       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0307 18:16:33.036088       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0307 18:16:33.046915       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-nghc4"
	I0307 18:16:33.301733       1 shared_informer.go:280] Caches are synced for garbage collector
	I0307 18:16:33.391572       1 shared_informer.go:280] Caches are synced for garbage collector
	I0307 18:16:33.391604       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W0307 18:17:03.875961       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-242095-m02" does not exist
	I0307 18:17:03.882618       1 range_allocator.go:372] Set node multinode-242095-m02 PodCIDR to [10.244.1.0/24]
	I0307 18:17:03.885476       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tbx65"
	I0307 18:17:03.885499       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j52z6"
	W0307 18:17:04.491974       1 topologycache.go:232] Can't get CPU or zone information for multinode-242095-m02 node
	W0307 18:17:07.920885       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-242095-m02. Assuming now as a timestamp.
	I0307 18:17:07.920926       1 event.go:294] "Event occurred" object="multinode-242095-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-242095-m02 event: Registered Node multinode-242095-m02 in Controller"
	I0307 18:17:08.478780       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0307 18:17:08.486660       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-jvgsd"
	I0307 18:17:08.490112       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-rfr2n"
	
	* 
	* ==> kube-proxy [bd0a44cc6e39] <==
	* I0307 18:16:34.102314       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0307 18:16:34.102399       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0307 18:16:34.102423       1 server_others.go:535] "Using iptables proxy"
	I0307 18:16:34.122840       1 server_others.go:176] "Using iptables Proxier"
	I0307 18:16:34.122873       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0307 18:16:34.122881       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0307 18:16:34.122895       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0307 18:16:34.122922       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 18:16:34.123223       1 server.go:655] "Version info" version="v1.26.2"
	I0307 18:16:34.123235       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:16:34.123735       1 config.go:317] "Starting service config controller"
	I0307 18:16:34.123758       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0307 18:16:34.123906       1 config.go:226] "Starting endpoint slice config controller"
	I0307 18:16:34.123935       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0307 18:16:34.124174       1 config.go:444] "Starting node config controller"
	I0307 18:16:34.124191       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0307 18:16:34.223848       1 shared_informer.go:280] Caches are synced for service config
	I0307 18:16:34.224890       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0307 18:16:34.224912       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [02c3cc1dc6e4] <==
	* W0307 18:16:16.808743       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 18:16:16.809716       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 18:16:16.808828       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:16.809814       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:16.808838       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:16:16.809912       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 18:16:16.808889       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:16:16.809986       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 18:16:16.808958       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:16.810078       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:16.809019       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 18:16:16.810153       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0307 18:16:16.809385       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 18:16:16.810219       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 18:16:17.709841       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 18:16:17.709869       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 18:16:17.783000       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 18:16:17.783040       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 18:16:17.827759       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:17.827853       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:17.859049       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:16:17.859098       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 18:16:17.942212       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 18:16:17.942272       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0307 18:16:18.304019       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-03-07 18:16:02 UTC, end at Tue 2023-03-07 18:17:16 UTC. --
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991671    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-kube-proxy\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991723    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-lib-modules\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991764    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7s8\" (UniqueName: \"kubernetes.io/projected/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-kube-api-access-fj7s8\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991793    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c406577e-74d2-4d81-b8a4-c827a78e2d61-cni-cfg\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991820    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c406577e-74d2-4d81-b8a4-c827a78e2d61-xtables-lock\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991867    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c20d9dc5-69a3-46f9-bdd7-7a54def58eac-xtables-lock\") pod \"kube-proxy-rjsmj\" (UID: \"c20d9dc5-69a3-46f9-bdd7-7a54def58eac\") " pod="kube-system/kube-proxy-rjsmj"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991915    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c406577e-74d2-4d81-b8a4-c827a78e2d61-lib-modules\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:32 multinode-242095 kubelet[2285]: I0307 18:16:32.991948    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xq9v\" (UniqueName: \"kubernetes.io/projected/c406577e-74d2-4d81-b8a4-c827a78e2d61-kube-api-access-5xq9v\") pod \"kindnet-4sm84\" (UID: \"c406577e-74d2-4d81-b8a4-c827a78e2d61\") " pod="kube-system/kindnet-4sm84"
	Mar 07 18:16:33 multinode-242095 kubelet[2285]: I0307 18:16:33.941332    2285 topology_manager.go:210] "Topology Admit Handler"
	Mar 07 18:16:33 multinode-242095 kubelet[2285]: I0307 18:16:33.998223    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea1890f3-3928-474e-8b2d-10da6a0e9f14-tmp\") pod \"storage-provisioner\" (UID: \"ea1890f3-3928-474e-8b2d-10da6a0e9f14\") " pod="kube-system/storage-provisioner"
	Mar 07 18:16:33 multinode-242095 kubelet[2285]: I0307 18:16:33.998278    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9kgp\" (UniqueName: \"kubernetes.io/projected/ea1890f3-3928-474e-8b2d-10da6a0e9f14-kube-api-access-j9kgp\") pod \"storage-provisioner\" (UID: \"ea1890f3-3928-474e-8b2d-10da6a0e9f14\") " pod="kube-system/storage-provisioner"
	Mar 07 18:16:34 multinode-242095 kubelet[2285]: I0307 18:16:34.423519    2285 topology_manager.go:210] "Topology Admit Handler"
	Mar 07 18:16:34 multinode-242095 kubelet[2285]: I0307 18:16:34.503139    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pllm\" (UniqueName: \"kubernetes.io/projected/17db7207-f2ce-4566-85fc-dc7e0eb65d09-kube-api-access-7pllm\") pod \"coredns-787d4945fb-fsll9\" (UID: \"17db7207-f2ce-4566-85fc-dc7e0eb65d09\") " pod="kube-system/coredns-787d4945fb-fsll9"
	Mar 07 18:16:34 multinode-242095 kubelet[2285]: I0307 18:16:34.503192    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17db7207-f2ce-4566-85fc-dc7e0eb65d09-config-volume\") pod \"coredns-787d4945fb-fsll9\" (UID: \"17db7207-f2ce-4566-85fc-dc7e0eb65d09\") " pod="kube-system/coredns-787d4945fb-fsll9"
	Mar 07 18:16:35 multinode-242095 kubelet[2285]: I0307 18:16:35.298565    2285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f26daf9a048370abc6adb78a364048bff982cd7ec4bc2104a111cebea0a0ef"
	Mar 07 18:16:35 multinode-242095 kubelet[2285]: I0307 18:16:35.315728    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rjsmj" podStartSLOduration=3.315683855 pod.CreationTimestamp="2023-03-07 18:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:35.108304994 +0000 UTC m=+15.526106179" watchObservedRunningTime="2023-03-07 18:16:35.315683855 +0000 UTC m=+15.733485043"
	Mar 07 18:16:35 multinode-242095 kubelet[2285]: I0307 18:16:35.315911    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.315878703 pod.CreationTimestamp="2023-03-07 18:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:35.315381924 +0000 UTC m=+15.733183131" watchObservedRunningTime="2023-03-07 18:16:35.315878703 +0000 UTC m=+15.733679902"
	Mar 07 18:16:36 multinode-242095 kubelet[2285]: I0307 18:16:36.339765    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fsll9" podStartSLOduration=4.339720505 pod.CreationTimestamp="2023-03-07 18:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:36.339542114 +0000 UTC m=+16.757343302" watchObservedRunningTime="2023-03-07 18:16:36.339720505 +0000 UTC m=+16.757521693"
	Mar 07 18:16:36 multinode-242095 kubelet[2285]: I0307 18:16:36.340117    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4sm84" podStartSLOduration=-9.2233720325147e+09 pod.CreationTimestamp="2023-03-07 18:16:32 +0000 UTC" firstStartedPulling="2023-03-07 18:16:33.82479639 +0000 UTC m=+14.242597570" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:16:36.325700598 +0000 UTC m=+16.743501786" watchObservedRunningTime="2023-03-07 18:16:36.340075894 +0000 UTC m=+16.757877126"
	Mar 07 18:16:40 multinode-242095 kubelet[2285]: I0307 18:16:40.402347    2285 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 07 18:16:40 multinode-242095 kubelet[2285]: I0307 18:16:40.402972    2285 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 07 18:16:49 multinode-242095 kubelet[2285]: I0307 18:16:49.407257    2285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f26daf9a048370abc6adb78a364048bff982cd7ec4bc2104a111cebea0a0ef"
	Mar 07 18:17:08 multinode-242095 kubelet[2285]: I0307 18:17:08.497076    2285 topology_manager.go:210] "Topology Admit Handler"
	Mar 07 18:17:08 multinode-242095 kubelet[2285]: I0307 18:17:08.607732    2285 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8n6h\" (UniqueName: \"kubernetes.io/projected/8887f3ec-66e6-4c54-9bd5-b93fe0e31681-kube-api-access-n8n6h\") pod \"busybox-6b86dd6d48-rfr2n\" (UID: \"8887f3ec-66e6-4c54-9bd5-b93fe0e31681\") " pod="default/busybox-6b86dd6d48-rfr2n"
	Mar 07 18:17:10 multinode-242095 kubelet[2285]: I0307 18:17:10.555078    2285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-rfr2n" podStartSLOduration=-9.223372034299732e+09 pod.CreationTimestamp="2023-03-07 18:17:08 +0000 UTC" firstStartedPulling="2023-03-07 18:17:09.083031242 +0000 UTC m=+49.500832426" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-07 18:17:10.554808963 +0000 UTC m=+50.972610151" watchObservedRunningTime="2023-03-07 18:17:10.555043611 +0000 UTC m=+50.972844798"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-242095 -n multinode-242095
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-242095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.90s)
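The DeployApp2Nodes failure above reports one pod IP (`'10.244.0.3'`) where the test expected two, i.e. the second busybox replica never got an IP on the m02 node before the check ran. A minimal sketch of the test's IP-count logic, run here against the captured jsonpath output rather than a live cluster (the `out` string below is copied from the failure; on a healthy run `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` would yield two space-separated IPs):

```shell
# Reproduce the test's "expected 2 Pod IPs" check against the captured output.
# out is the stdout recorded in the failure above; a passing run has two IPs.
out="'10.244.0.3'"
ips=$(echo "$out" | tr -d "'")          # strip the quoting jsonpath adds
count=$(echo "$ips" | wc -w)            # one whitespace-separated IP per pod
echo "pod IP count: $count"
if [ "$count" -ne 2 ]; then
  echo "expected 2 Pod IPs but got $count: $ips"
fi
```

The same count taken a few seconds later often succeeds, which is why this class of failure is usually a pod-startup race rather than a CNI fault; the kindnet log above shows the 10.244.1.0/24 route for m02 being added at 18:17:06, only moments before the check.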


Test pass (292/313)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 4.38
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.26.2/json-events 4.26
11 TestDownloadOnly/v1.26.2/preload-exists 0
15 TestDownloadOnly/v1.26.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.66
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 1.7
19 TestBinaryMirror 1.2
20 TestOffline 64.83
22 TestAddons/Setup 100.96
24 TestAddons/parallel/Registry 14.96
25 TestAddons/parallel/Ingress 24.08
26 TestAddons/parallel/MetricsServer 5.85
27 TestAddons/parallel/HelmTiller 10.98
29 TestAddons/parallel/CSI 61.92
30 TestAddons/parallel/Headlamp 9.15
31 TestAddons/parallel/CloudSpanner 5.44
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 11.18
36 TestCertOptions 32.47
37 TestCertExpiration 250.25
38 TestDockerFlags 33.88
39 TestForceSystemdFlag 51.35
40 TestForceSystemdEnv 37.59
41 TestKVMDriverInstallOrUpdate 1.85
45 TestErrorSpam/setup 26.95
46 TestErrorSpam/start 1.16
47 TestErrorSpam/status 1.47
48 TestErrorSpam/pause 1.64
49 TestErrorSpam/unpause 1.61
50 TestErrorSpam/stop 2.5
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 44.47
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 48.19
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.06
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.86
62 TestFunctional/serial/CacheCmd/cache/add_local 0.95
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.49
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
67 TestFunctional/serial/CacheCmd/cache/delete 0.11
68 TestFunctional/serial/MinikubeKubectlCmd 0.12
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
70 TestFunctional/serial/ExtraConfig 44.68
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.12
73 TestFunctional/serial/LogsFileCmd 1.21
75 TestFunctional/parallel/ConfigCmd 0.42
76 TestFunctional/parallel/DashboardCmd 8.19
77 TestFunctional/parallel/DryRun 0.78
78 TestFunctional/parallel/InternationalLanguage 0.37
79 TestFunctional/parallel/StatusCmd 2.02
83 TestFunctional/parallel/ServiceCmdConnect 8.97
84 TestFunctional/parallel/AddonsCmd 0.21
85 TestFunctional/parallel/PersistentVolumeClaim 27.48
87 TestFunctional/parallel/SSHCmd 0.97
88 TestFunctional/parallel/CpCmd 2.43
89 TestFunctional/parallel/MySQL 25.57
90 TestFunctional/parallel/FileSync 0.68
91 TestFunctional/parallel/CertSync 3.64
95 TestFunctional/parallel/NodeLabels 0.06
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
99 TestFunctional/parallel/License 0.15
100 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
101 TestFunctional/parallel/DockerEnv/bash 2.14
102 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
103 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
104 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
105 TestFunctional/parallel/Version/short 0.06
106 TestFunctional/parallel/Version/components 0.85
107 TestFunctional/parallel/ProfileCmd/profile_not_create 0.84
108 TestFunctional/parallel/ProfileCmd/profile_list 0.66
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.79
110 TestFunctional/parallel/MountCmd/any-port 13.33
111 TestFunctional/parallel/ServiceCmd/List 0.66
112 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
113 TestFunctional/parallel/ServiceCmd/HTTPS 0.66
114 TestFunctional/parallel/ServiceCmd/Format 0.76
115 TestFunctional/parallel/ServiceCmd/URL 0.63
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.2
120 TestFunctional/parallel/MountCmd/specific-port 3.12
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
131 TestFunctional/parallel/ImageCommands/ImageBuild 2.38
132 TestFunctional/parallel/ImageCommands/Setup 0.97
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.65
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.66
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.92
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.24
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.51
140 TestFunctional/delete_addon-resizer_images 0.16
141 TestFunctional/delete_my-image_image 0.06
142 TestFunctional/delete_minikube_cached_images 0.07
146 TestImageBuild/serial/NormalBuild 0.95
147 TestImageBuild/serial/BuildWithBuildArg 1.1
148 TestImageBuild/serial/BuildWithDockerIgnore 0.46
149 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.38
152 TestIngressAddonLegacy/StartLegacyK8sCluster 52.41
154 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.21
155 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.51
156 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.84
159 TestJSONOutput/start/Command 41.63
160 TestJSONOutput/start/Audit 0
162 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/pause/Command 0.68
166 TestJSONOutput/pause/Audit 0
168 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/unpause/Command 0.64
172 TestJSONOutput/unpause/Audit 0
174 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/stop/Command 5.92
178 TestJSONOutput/stop/Audit 0
180 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
182 TestErrorJSONOutput 0.48
184 TestKicCustomNetwork/create_custom_network 30.42
185 TestKicCustomNetwork/use_default_bridge_network 30.23
186 TestKicExistingNetwork 30.27
187 TestKicCustomSubnet 30.47
188 TestKicStaticIP 30.09
189 TestMainNoArgs 0.05
190 TestMinikubeProfile 61.28
193 TestMountStart/serial/StartWithMountFirst 7.27
194 TestMountStart/serial/VerifyMountFirst 0.44
195 TestMountStart/serial/StartWithMountSecond 7
196 TestMountStart/serial/VerifyMountSecond 0.45
197 TestMountStart/serial/DeleteFirst 2.07
198 TestMountStart/serial/VerifyMountPostDelete 0.44
199 TestMountStart/serial/Stop 1.39
200 TestMountStart/serial/RestartStopped 7.84
201 TestMountStart/serial/VerifyMountPostStop 0.44
204 TestMultiNode/serial/FreshStart2Nodes 73.38
207 TestMultiNode/serial/AddNode 18.68
208 TestMultiNode/serial/ProfileList 0.5
209 TestMultiNode/serial/CopyFile 16.6
210 TestMultiNode/serial/StopNode 3.07
211 TestMultiNode/serial/StartAfterStop 12.67
212 TestMultiNode/serial/RestartKeepsNodes 94.25
213 TestMultiNode/serial/DeleteNode 6.11
214 TestMultiNode/serial/StopMultiNode 22.05
215 TestMultiNode/serial/RestartMultiNode 58.04
216 TestMultiNode/serial/ValidateNameConflict 31.07
221 TestPreload 135.51
223 TestScheduledStopUnix 101.31
224 TestSkaffold 61.35
226 TestInsufficientStorage 12.96
227 TestRunningBinaryUpgrade 91.14
229 TestKubernetesUpgrade 142.8
230 TestMissingContainerUpgrade 115.69
232 TestStoppedBinaryUpgrade/Setup 0.53
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
234 TestNoKubernetes/serial/StartWithK8s 48.4
235 TestStoppedBinaryUpgrade/Upgrade 81.46
236 TestNoKubernetes/serial/StartWithStopK8s 19.71
248 TestNoKubernetes/serial/Start 8.91
249 TestNoKubernetes/serial/VerifyK8sNotRunning 0.74
250 TestNoKubernetes/serial/ProfileList 3.76
251 TestNoKubernetes/serial/Stop 1.7
252 TestStoppedBinaryUpgrade/MinikubeLogs 1.9
253 TestNoKubernetes/serial/StartNoArgs 11.38
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.7
263 TestPause/serial/Start 41.95
264 TestPause/serial/SecondStartNoReconfiguration 44.2
265 TestNetworkPlugins/group/auto/Start 57.77
266 TestNetworkPlugins/group/kindnet/Start 55.13
267 TestPause/serial/Pause 0.87
268 TestPause/serial/VerifyStatus 0.55
269 TestPause/serial/Unpause 0.64
270 TestPause/serial/PauseAgain 0.81
271 TestPause/serial/DeletePaused 2.84
272 TestPause/serial/VerifyDeletedResources 1.2
273 TestNetworkPlugins/group/calico/Start 72.52
274 TestNetworkPlugins/group/auto/KubeletFlags 0.64
275 TestNetworkPlugins/group/auto/NetCatPod 10.26
276 TestNetworkPlugins/group/auto/DNS 0.18
277 TestNetworkPlugins/group/auto/Localhost 0.17
278 TestNetworkPlugins/group/auto/HairPin 0.17
279 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
280 TestNetworkPlugins/group/kindnet/KubeletFlags 0.59
281 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
282 TestNetworkPlugins/group/kindnet/DNS 0.2
283 TestNetworkPlugins/group/kindnet/Localhost 0.16
284 TestNetworkPlugins/group/kindnet/HairPin 0.18
285 TestNetworkPlugins/group/custom-flannel/Start 64.82
286 TestNetworkPlugins/group/false/Start 48.49
287 TestNetworkPlugins/group/calico/ControllerPod 5.02
288 TestNetworkPlugins/group/enable-default-cni/Start 45.45
289 TestNetworkPlugins/group/calico/KubeletFlags 0.65
290 TestNetworkPlugins/group/calico/NetCatPod 10.93
291 TestNetworkPlugins/group/calico/DNS 0.15
292 TestNetworkPlugins/group/calico/Localhost 0.16
293 TestNetworkPlugins/group/calico/HairPin 0.13
294 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.86
295 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.32
296 TestNetworkPlugins/group/false/KubeletFlags 0.57
297 TestNetworkPlugins/group/false/NetCatPod 11.23
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.74
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.34
300 TestNetworkPlugins/group/flannel/Start 60.71
301 TestNetworkPlugins/group/custom-flannel/DNS 0.29
302 TestNetworkPlugins/group/custom-flannel/Localhost 0.33
303 TestNetworkPlugins/group/custom-flannel/HairPin 0.32
304 TestNetworkPlugins/group/false/DNS 0.17
305 TestNetworkPlugins/group/false/Localhost 0.14
306 TestNetworkPlugins/group/false/HairPin 0.15
307 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
310 TestNetworkPlugins/group/bridge/Start 89.65
311 TestNetworkPlugins/group/kubenet/Start 50.52
313 TestStartStop/group/old-k8s-version/serial/FirstStart 124.94
314 TestNetworkPlugins/group/flannel/ControllerPod 5.02
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.88
316 TestNetworkPlugins/group/flannel/NetCatPod 10.38
317 TestNetworkPlugins/group/flannel/DNS 0.16
318 TestNetworkPlugins/group/flannel/Localhost 0.13
319 TestNetworkPlugins/group/flannel/HairPin 0.16
320 TestNetworkPlugins/group/kubenet/KubeletFlags 0.52
321 TestNetworkPlugins/group/kubenet/NetCatPod 11.23
322 TestNetworkPlugins/group/kubenet/DNS 0.17
323 TestNetworkPlugins/group/kubenet/Localhost 0.16
324 TestNetworkPlugins/group/kubenet/HairPin 0.16
326 TestStartStop/group/no-preload/serial/FirstStart 53.27
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.52
328 TestNetworkPlugins/group/bridge/NetCatPod 13.23
330 TestStartStop/group/embed-certs/serial/FirstStart 48.2
331 TestNetworkPlugins/group/bridge/DNS 0.19
332 TestNetworkPlugins/group/bridge/Localhost 0.19
333 TestNetworkPlugins/group/bridge/HairPin 0.19
334 TestStartStop/group/no-preload/serial/DeployApp 8.35
335 TestStartStop/group/old-k8s-version/serial/DeployApp 7.5
336 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
337 TestStartStop/group/no-preload/serial/Stop 11.05
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.82
340 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
341 TestStartStop/group/old-k8s-version/serial/Stop 10.93
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
343 TestStartStop/group/no-preload/serial/SecondStart 558.49
344 TestStartStop/group/embed-certs/serial/DeployApp 8.4
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
346 TestStartStop/group/old-k8s-version/serial/SecondStart 337.98
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
348 TestStartStop/group/embed-certs/serial/Stop 10.94
349 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
350 TestStartStop/group/embed-certs/serial/SecondStart 315.61
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.38
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.73
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.14
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 559.92
356 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.02
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
360 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.54
361 TestStartStop/group/embed-certs/serial/Pause 3.88
362 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.58
363 TestStartStop/group/old-k8s-version/serial/Pause 4.18
365 TestStartStop/group/newest-cni/serial/FirstStart 42.17
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
368 TestStartStop/group/newest-cni/serial/Stop 5.9
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
370 TestStartStop/group/newest-cni/serial/SecondStart 27.91
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
374 TestStartStop/group/newest-cni/serial/Pause 3.59
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.51
378 TestStartStop/group/no-preload/serial/Pause 3.53
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.49
382 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.38
TestDownloadOnly/v1.16.0/json-events (4.38s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-429940 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-429940 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.383205409s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.38s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-429940
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-429940: exit status 85 (68.97814ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-429940 | jenkins | v1.29.0 | 07 Mar 23 18:01 UTC |          |
	|         | -p download-only-429940        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 18:01:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:01:14.862128  642755 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:01:14.862305  642755 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:14.862313  642755 out.go:309] Setting ErrFile to fd 2...
	I0307 18:01:14.862318  642755 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:14.862415  642755 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	W0307 18:01:14.862514  642755 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15985-636026/.minikube/config/config.json: open /home/jenkins/minikube-integration/15985-636026/.minikube/config/config.json: no such file or directory
	I0307 18:01:14.863078  642755 out.go:303] Setting JSON to true
	I0307 18:01:14.864465  642755 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6226,"bootTime":1678205849,"procs":685,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:01:14.864552  642755 start.go:135] virtualization: kvm guest
	I0307 18:01:14.867177  642755 out.go:97] [download-only-429940] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0307 18:01:14.867299  642755 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15985-636026/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 18:01:14.868830  642755 out.go:169] MINIKUBE_LOCATION=15985
	I0307 18:01:14.867412  642755 notify.go:220] Checking for updates...
	I0307 18:01:14.871725  642755 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:01:14.873150  642755 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:01:14.874501  642755 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	I0307 18:01:14.875955  642755 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0307 18:01:14.878700  642755 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:01:14.878972  642755 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:01:14.951358  642755 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0307 18:01:14.951499  642755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:01:15.070779  642755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-03-07 18:01:15.060653596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:01:15.070886  642755 docker.go:294] overlay module found
	I0307 18:01:15.073086  642755 out.go:97] Using the docker driver based on user configuration
	I0307 18:01:15.073118  642755 start.go:296] selected driver: docker
	I0307 18:01:15.073126  642755 start.go:857] validating driver "docker" against <nil>
	I0307 18:01:15.073229  642755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:01:15.192011  642755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-03-07 18:01:15.183386256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:01:15.192141  642755 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0307 18:01:15.192794  642755 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0307 18:01:15.193003  642755 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:01:15.195186  642755 out.go:169] Using Docker driver with root privileges
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-429940"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.26.2/json-events (4.26s)

=== RUN   TestDownloadOnly/v1.26.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-429940 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-429940 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.259178521s)
--- PASS: TestDownloadOnly/v1.26.2/json-events (4.26s)

TestDownloadOnly/v1.26.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.2/preload-exists
--- PASS: TestDownloadOnly/v1.26.2/preload-exists (0.00s)

TestDownloadOnly/v1.26.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.26.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-429940
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-429940: exit status 85 (71.384269ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-429940 | jenkins | v1.29.0 | 07 Mar 23 18:01 UTC |          |
	|         | -p download-only-429940        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-429940 | jenkins | v1.29.0 | 07 Mar 23 18:01 UTC |          |
	|         | -p download-only-429940        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 18:01:19
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:01:19.316914  643001 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:01:19.317154  643001 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:19.317164  643001 out.go:309] Setting ErrFile to fd 2...
	I0307 18:01:19.317168  643001 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:19.317303  643001 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	W0307 18:01:19.317445  643001 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15985-636026/.minikube/config/config.json: open /home/jenkins/minikube-integration/15985-636026/.minikube/config/config.json: no such file or directory
	I0307 18:01:19.317904  643001 out.go:303] Setting JSON to true
	I0307 18:01:19.319306  643001 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6231,"bootTime":1678205849,"procs":683,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:01:19.319384  643001 start.go:135] virtualization: kvm guest
	I0307 18:01:19.321671  643001 out.go:97] [download-only-429940] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:01:19.323235  643001 out.go:169] MINIKUBE_LOCATION=15985
	I0307 18:01:19.321823  643001 notify.go:220] Checking for updates...
	I0307 18:01:19.326002  643001 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:01:19.327468  643001 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:01:19.328823  643001 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	I0307 18:01:19.330147  643001 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-429940"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.66s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-429940
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.7s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-323001 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-323001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-323001
--- PASS: TestDownloadOnlyKic (1.70s)

TestBinaryMirror (1.2s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-684115 --alsologtostderr --binary-mirror http://127.0.0.1:40353 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-684115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-684115
--- PASS: TestBinaryMirror (1.20s)

TestOffline (64.83s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-148961 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-148961 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m0.506151969s)
helpers_test.go:175: Cleaning up "offline-docker-148961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-148961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-148961: (4.327689139s)
--- PASS: TestOffline (64.83s)

TestAddons/Setup (100.96s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-581908 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-581908 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m40.956401778s)
--- PASS: TestAddons/Setup (100.96s)

TestAddons/parallel/Registry (14.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 11.430396ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xq8xq" [dae23947-4d79-4fe0-a965-24966a9d94b2] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007479573s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ghg7d" [dd6ecff8-8a96-4ac4-8cc7-c7c392072aa3] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007311751s
addons_test.go:305: (dbg) Run:  kubectl --context addons-581908 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-581908 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-581908 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.947591299s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 ip
2023/03/07 18:03:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.96s)

TestAddons/parallel/Ingress (24.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-581908 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context addons-581908 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (3.090422887s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-581908 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-581908 replace --force -f testdata/nginx-ingress-v1.yaml: (1.02846069s)
addons_test.go:210: (dbg) Run:  kubectl --context addons-581908 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d4dd649e-a96a-4cc1-b350-961211c5a2eb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d4dd649e-a96a-4cc1-b350-961211c5a2eb] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006427652s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-581908 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-581908 addons disable ingress --alsologtostderr -v=1: (7.699358069s)
--- PASS: TestAddons/parallel/Ingress (24.08s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 11.357704ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-8h8ng" [a930960e-7909-43a8-a633-3a9fac526967] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007207325s
addons_test.go:380: (dbg) Run:  kubectl --context addons-581908 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

TestAddons/parallel/HelmTiller (10.98s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.152627ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-qxznr" [040a5577-ce40-479e-accd-0c11b2ac0864] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008893487s
addons_test.go:438: (dbg) Run:  kubectl --context addons-581908 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-581908 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.361444498s)
addons_test.go:443: kubectl --context addons-581908 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.98s)

TestAddons/parallel/CSI (61.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.424456ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-581908 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-581908 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fc5f5a32-422a-4b64-9d05-3f5ca7c21a31] Pending
helpers_test.go:344: "task-pv-pod" [fc5f5a32-422a-4b64-9d05-3f5ca7c21a31] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fc5f5a32-422a-4b64-9d05-3f5ca7c21a31] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.006230465s
addons_test.go:549: (dbg) Run:  kubectl --context addons-581908 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-581908 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-581908 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-581908 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-581908 delete pod task-pv-pod: (1.287691422s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-581908 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-581908 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-581908 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-581908 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6f26ce62-a8f8-4aa6-9295-bc7927668529] Pending
helpers_test.go:344: "task-pv-pod-restore" [6f26ce62-a8f8-4aa6-9295-bc7927668529] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6f26ce62-a8f8-4aa6-9295-bc7927668529] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.010611505s
addons_test.go:591: (dbg) Run:  kubectl --context addons-581908 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-581908 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-581908 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-581908 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.471893662s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-581908 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.92s)

TestAddons/parallel/Headlamp (9.15s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-581908 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-581908 --alsologtostderr -v=1: (1.140994207s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-qq7d7" [9b882037-bc20-48ef-ade4-b72fa46e0767] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-qq7d7" [9b882037-bc20-48ef-ade4-b72fa46e0767] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.007485336s
--- PASS: TestAddons/parallel/Headlamp (9.15s)

TestAddons/parallel/CloudSpanner (5.44s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-58d646969f-pp6hs" [01321cd6-c82b-4a5e-b34a-dbc26c798e76] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006473365s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-581908
--- PASS: TestAddons/parallel/CloudSpanner (5.44s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-581908 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-581908 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (11.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-581908
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-581908: (10.928845231s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-581908
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-581908
--- PASS: TestAddons/StoppedEnableDisable (11.18s)

TestCertOptions (32.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-779892 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-779892 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.623125312s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-779892 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-779892 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-779892 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-779892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-779892
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-779892: (2.802586581s)
--- PASS: TestCertOptions (32.47s)

TestCertExpiration (250.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-687603 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-687603 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.611341171s)
E0307 18:29:13.475896  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-687603 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0307 18:32:13.881750  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-687603 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (32.739266937s)
helpers_test.go:175: Cleaning up "cert-expiration-687603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-687603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-687603: (2.900339129s)
--- PASS: TestCertExpiration (250.25s)

TestDockerFlags (33.88s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-291692 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-291692 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.667486523s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-291692 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-291692 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-291692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-291692
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-291692: (3.10738832s)
--- PASS: TestDockerFlags (33.88s)

TestForceSystemdFlag (51.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-193212 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-193212 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (47.495298725s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-193212 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-193212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-193212
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-193212: (3.267745718s)
--- PASS: TestForceSystemdFlag (51.35s)

TestForceSystemdEnv (37.59s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-964004 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-964004 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.918578118s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-964004 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-964004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-964004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-964004: (3.995413107s)
--- PASS: TestForceSystemdEnv (37.59s)

TestKVMDriverInstallOrUpdate (1.85s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.85s)

TestErrorSpam/setup (26.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-301828 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-301828 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-301828 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-301828 --driver=docker  --container-runtime=docker: (26.950696751s)
--- PASS: TestErrorSpam/setup (26.95s)

TestErrorSpam/start (1.16s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 start --dry-run
--- PASS: TestErrorSpam/start (1.16s)

TestErrorSpam/status (1.47s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 status
--- PASS: TestErrorSpam/status (1.47s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (2.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 stop: (2.126460308s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-301828 --log_dir /tmp/nospam-301828 stop
--- PASS: TestErrorSpam/stop (2.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/15985-636026/.minikube/files/etc/test/nested/copy/642743/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (44.47s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706383 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-706383 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (44.471515701s)
--- PASS: TestFunctional/serial/StartWithProxy (44.47s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (48.19s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706383 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-706383 --alsologtostderr -v=8: (48.188416631s)
functional_test.go:658: soft start took 48.18910201s for "functional-706383" cluster.
--- PASS: TestFunctional/serial/SoftStart (48.19s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-706383 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

TestFunctional/serial/CacheCmd/cache/add_local (0.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-706383 /tmp/TestFunctionalserialCacheCmdcacheadd_local2457339520/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cache add minikube-local-cache-test:functional-706383
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cache delete minikube-local-cache-test:functional-706383
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-706383
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.95s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.49s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (499.769734ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
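The cache_reload sequence above is: remove the image from the node (`docker rmi`), confirm `crictl inspecti` now fails, run `minikube cache reload`, then confirm the image is present again. A toy in-memory model of that round trip (the sets and helper names below stand in for the node's image store and minikube's on-disk cache; none of this is minikube code):

```python
# Toy model of the cache_reload round trip exercised by the test.
# "cache" stands in for minikube's local image cache on the host,
# "node_images" for the image store inside the node's container runtime.
cache = {"k8s.gcr.io/pause:latest"}
node_images = {"k8s.gcr.io/pause:latest"}

def rmi(image):
    """Models 'docker rmi' on the node: drop the image if present."""
    node_images.discard(image)

def inspecti(image):
    """Models 'crictl inspecti': exit 0 if the image exists, 1 otherwise."""
    return 0 if image in node_images else 1

def cache_reload():
    """Models 'minikube cache reload': push cached images back to the node."""
    node_images.update(cache)

rmi("k8s.gcr.io/pause:latest")
assert inspecti("k8s.gcr.io/pause:latest") == 1  # image gone, as in the log
cache_reload()
assert inspecti("k8s.gcr.io/pause:latest") == 0  # restored from the cache
```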

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 kubectl -- --context functional-706383 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-706383 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (44.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706383 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-706383 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.67959371s)
functional_test.go:756: restart took 44.679787069s for "functional-706383" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.68s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-706383 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.12s)
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-706383 logs: (1.124781687s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 logs --file /tmp/TestFunctionalserialLogsFileCmd2877193865/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-706383 logs --file /tmp/TestFunctionalserialLogsFileCmd2877193865/001/logs.txt: (1.212836407s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 config get cpus: exit status 14 (70.931468ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 config get cpus: exit status 14 (55.884199ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
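The unset/get/set round trip above relies on `config get` exiting with status 14 ("specified key could not be found in config") when the key is absent, while `unset` succeeds even for a missing key. A toy in-memory sketch of that contract (the `ConfigStore` class is illustrative, not minikube's implementation):

```python
# Toy model of the "minikube config" contract exercised above.
# KEY_MISSING mirrors the exit status 14 observed in the log for
# "config get" on an absent key; OK mirrors exit status 0.
KEY_MISSING = 14
OK = 0

class ConfigStore:
    def __init__(self):
        self._values = {}

    def set(self, key, value):
        self._values[key] = value
        return OK

    def unset(self, key):
        # Unsetting a missing key still succeeds, as in the test's first step.
        self._values.pop(key, None)
        return OK

    def get(self, key):
        if key not in self._values:
            return KEY_MISSING, None
        return OK, self._values[key]

store = ConfigStore()
assert store.unset("cpus") == OK                    # config unset cpus
assert store.get("cpus") == (KEY_MISSING, None)     # config get cpus -> 14
assert store.set("cpus", "2") == OK                 # config set cpus 2
assert store.get("cpus") == (OK, "2")               # config get cpus -> 0
assert store.unset("cpus") == OK                    # config unset cpus
assert store.get("cpus") == (KEY_MISSING, None)     # config get cpus -> 14
```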

TestFunctional/parallel/DashboardCmd (8.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-706383 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-706383 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 705459: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.19s)

TestFunctional/parallel/DryRun (0.78s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706383 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-706383 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (320.788738ms)

-- stdout --
	* [functional-706383] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0307 18:07:56.640784  699186 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:07:56.640927  699186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:07:56.640938  699186 out.go:309] Setting ErrFile to fd 2...
	I0307 18:07:56.640943  699186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:07:56.641059  699186 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	I0307 18:07:56.641786  699186 out.go:303] Setting JSON to false
	I0307 18:07:56.643524  699186 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6628,"bootTime":1678205849,"procs":478,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:07:56.643600  699186 start.go:135] virtualization: kvm guest
	I0307 18:07:56.646851  699186 out.go:177] * [functional-706383] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:07:56.649558  699186 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:07:56.649816  699186 notify.go:220] Checking for updates...
	I0307 18:07:56.651382  699186 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:07:56.653561  699186 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:07:56.655496  699186 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	I0307 18:07:56.657243  699186 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:07:56.658968  699186 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:07:56.660908  699186 config.go:182] Loaded profile config "functional-706383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:07:56.661364  699186 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:07:56.753400  699186 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0307 18:07:56.753543  699186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:07:56.885737  699186 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:38 SystemTime:2023-03-07 18:07:56.876376191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:07:56.885833  699186 docker.go:294] overlay module found
	I0307 18:07:56.890217  699186 out.go:177] * Using the docker driver based on existing profile
	I0307 18:07:56.891773  699186 start.go:296] selected driver: docker
	I0307 18:07:56.891801  699186 start.go:857] validating driver "docker" against &{Name:functional-706383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-706383 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:07:56.891948  699186 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:07:56.894927  699186 out.go:177] 
	W0307 18:07:56.896511  699186 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 18:07:56.898248  699186 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706383 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.78s)
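The non-zero exit above is minikube's memory validation rejecting the dry run: the requested 250MiB is below the 1800MB usable minimum quoted in the log, producing exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch of that kind of floor check (the function name and return convention are made up for illustration; only the two numbers come from the log):

```python
# Floor-check sketch based on the values in the log above.
MINIMUM_MEMORY_MB = 1800                    # "usable minimum of 1800MB"
EXIT_RSRC_INSUFFICIENT_REQ_MEMORY = 23      # exit status 23 observed above

def validate_requested_memory(requested_mb: int) -> int:
    """Return 0 if the request clears the floor, else the resource-error status."""
    if requested_mb < MINIMUM_MEMORY_MB:
        return EXIT_RSRC_INSUFFICIENT_REQ_MEMORY
    return 0

assert validate_requested_memory(250) == 23   # --memory 250MB is rejected
assert validate_requested_memory(4000) == 0   # --memory=4000 (StartWithProxy) passes
```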

TestFunctional/parallel/InternationalLanguage (0.37s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706383 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-706383 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (365.664914ms)

-- stdout --
	* [functional-706383] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0307 18:07:57.435774  699768 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:07:57.435968  699768 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:07:57.435982  699768 out.go:309] Setting ErrFile to fd 2...
	I0307 18:07:57.435989  699768 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:07:57.436238  699768 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	I0307 18:07:57.436956  699768 out.go:303] Setting JSON to false
	I0307 18:07:57.438154  699768 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6629,"bootTime":1678205849,"procs":480,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:07:57.438215  699768 start.go:135] virtualization: kvm guest
	I0307 18:07:57.441758  699768 out.go:177] * [functional-706383] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0307 18:07:57.443615  699768 notify.go:220] Checking for updates...
	I0307 18:07:57.450669  699768 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:07:57.452472  699768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:07:57.455635  699768 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	I0307 18:07:57.457413  699768 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	I0307 18:07:57.459023  699768 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:07:57.460630  699768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:07:57.462675  699768 config.go:182] Loaded profile config "functional-706383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:07:57.463318  699768 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:07:57.548960  699768 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0307 18:07:57.549070  699768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:07:57.715546  699768 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:38 SystemTime:2023-03-07 18:07:57.704311278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:07:57.715698  699768 docker.go:294] overlay module found
	I0307 18:07:57.718302  699768 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0307 18:07:57.719708  699768 start.go:296] selected driver: docker
	I0307 18:07:57.719737  699768 start.go:857] validating driver "docker" against &{Name:functional-706383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-706383 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:07:57.719857  699768 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:07:57.722418  699768 out.go:177] 
	W0307 18:07:57.723890  699768 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 18:07:57.725294  699768 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.37s)
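For reference, the localized dry-run above can be approximated outside the test harness. This is a hypothetical sketch: it assumes the minikube binary at the path the suite uses and a French locale installed on the host, and is wrapped in a function so nothing runs on sourcing.

```shell
# Hypothetical reproduction of the localized dry-run above (assumes
# out/minikube-linux-amd64 exists and fr_FR.UTF-8 is installed).
repro_international_language() {
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-706383 \
    --dry-run --memory 250MB --alsologtostderr \
    --driver=docker --container-runtime=docker
  # Per the log above: exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY),
  # because 250 MB is below minikube's 1800 MB minimum.
}
```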

                                                
                                    
TestFunctional/parallel/StatusCmd (2.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.97s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-706383 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-706383 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-n8ss7" [8e43732f-2573-442e-a35a-c7f095d80a84] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-n8ss7" [8e43732f-2573-442e-a35a-c7f095d80a84] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008631136s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:31308
functional_test.go:1673: http://192.168.49.2:31308: success! body:

Hostname: hello-node-connect-5cf7cc858f-n8ss7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31308
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.97s)
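The endpoint probe the test performs can be repeated by hand; a sketch, with the caveat that the IP and NodePort come from `minikube service hello-node-connect --url` on this particular run and will differ on another.

```shell
# Hypothetical manual probe of the echoserver NodePort discovered above.
# The URL is the one this run found; it is not stable across runs.
check_hello_node_connect() {
  # -s: quiet, -f: fail on HTTP errors; the body should start with "Hostname:".
  curl -sf "http://192.168.49.2:31308/" | grep -q '^Hostname:'
}
```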

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [02d99665-220a-48cd-a592-565f7e7d32df] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007466164s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-706383 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-706383 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-706383 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-706383 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-706383 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [11464c51-adab-4a55-a2d0-a754ff8320c4] Pending
helpers_test.go:344: "sp-pod" [11464c51-adab-4a55-a2d0-a754ff8320c4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [11464c51-adab-4a55-a2d0-a754ff8320c4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.006813203s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-706383 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-706383 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-706383 delete -f testdata/storage-provisioner/pod.yaml: (1.89670305s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-706383 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f4fb9f09-1a50-4ac6-bef0-cfded68e8403] Pending
helpers_test.go:344: "sp-pod" [f4fb9f09-1a50-4ac6-bef0-cfded68e8403] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0307 18:08:19.123694  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [f4fb9f09-1a50-4ac6-bef0-cfded68e8403] Running
2023/03/07 18:08:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.033391907s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-706383 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.48s)
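The test applies `testdata/storage-provisioner/pvc.yaml`, whose contents do not appear in this log. A minimal claim of the same shape might look like the following; the field values (notably the `storage` request) are illustrative assumptions, not the testdata file.

```shell
# Sketch of a PVC like the one the test applies; the real testdata file is
# not shown in this log, so the spec values here are assumptions.
cat <<'EOF' > /tmp/myclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# Would be applied with:
#   kubectl --context functional-706383 apply -f /tmp/myclaim.yaml
```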

                                                
                                    
TestFunctional/parallel/SSHCmd (0.97s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.97s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh -n functional-706383 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 cp functional-706383:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd496024363/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh -n functional-706383 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)

                                                
                                    
TestFunctional/parallel/MySQL (25.57s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-706383 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-2sgvd" [3c66e254-d4da-4b18-9e47-73d73cb0aa2a] Pending
helpers_test.go:344: "mysql-888f84dd9-2sgvd" [3c66e254-d4da-4b18-9e47-73d73cb0aa2a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-2sgvd" [3c66e254-d4da-4b18-9e47-73d73cb0aa2a] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.025911675s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;": exit status 1 (253.686042ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0307 18:08:09.520706  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
functional_test.go:1802: (dbg) Run:  kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;"
E0307 18:08:10.161174  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;": exit status 1 (218.677511ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;": exit status 1 (144.796916ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0307 18:08:11.442716  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
functional_test.go:1802: (dbg) Run:  kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;": exit status 1 (154.913963ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0307 18:08:14.003112  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
functional_test.go:1802: (dbg) Run:  kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.57s)
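The test absorbs the `ERROR 1045` and `ERROR 2002` failures above by re-running the query until MySQL finishes initializing. That retry pattern, extracted into a standalone helper (names and the delay knob are ours, not the test's):

```shell
# Generic retry loop mirroring the test's behavior: re-run a command until it
# succeeds or the attempt budget runs out. Delay is configurable via RETRY_DELAY.
retry() {
  local attempts=$1 delay=${RETRY_DELAY:-1}
  shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0   # success: stop retrying
    sleep "$delay"     # transient failure: wait and try again
  done
  return 1             # budget exhausted
}
# e.g. retry 10 kubectl --context functional-706383 exec mysql-888f84dd9-2sgvd \
#        -- mysql -ppassword -e "show databases;"
```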

                                                
                                    
TestFunctional/parallel/FileSync (0.68s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/642743/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo cat /etc/test/nested/copy/642743/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.68s)

                                                
                                    
TestFunctional/parallel/CertSync (3.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/642743.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo cat /etc/ssl/certs/642743.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/642743.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo cat /usr/share/ca-certificates/642743.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/6427432.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo cat /etc/ssl/certs/6427432.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/6427432.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo cat /usr/share/ca-certificates/6427432.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.64s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-706383 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 ssh "sudo systemctl is-active crio": exit status 1 (600.930737ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)

                                                
                                    
TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-706383 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-706383 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-g2lxn" [a016d481-ec28-4624-914a-08752b07efef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-g2lxn" [a016d481-ec28-4624-914a-08752b07efef] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.013993751s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.14s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-706383 docker-env) && out/minikube-linux-amd64 status -p functional-706383"
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-706383 docker-env) && out/minikube-linux-amd64 status -p functional-706383": (1.321085431s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-706383 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.85s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.85s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.84s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.84s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.66s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "598.554263ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "58.144409ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.66s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.79s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "736.672726ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "55.281444ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.79s)
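The `Took "..."` lines above are wall-clock timings of each CLI invocation, recorded by the test harness. A rough shell equivalent, assuming GNU `date` (the timed command is a placeholder for `out/minikube-linux-amd64 profile list -o json`):

```shell
#!/bin/sh
# Time a command the way the harness reports its "Took ..." durations.
# `sleep 0.1` stands in for the actual minikube invocation.
start=$(date +%s%N)            # nanoseconds since epoch (GNU date)
sleep 0.1
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "Took ${elapsed_ms}ms"
```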

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (13.33s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-706383 /tmp/TestFunctionalparallelMountCmdany-port1311089330/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1678212475825847916" to /tmp/TestFunctionalparallelMountCmdany-port1311089330/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1678212475825847916" to /tmp/TestFunctionalparallelMountCmdany-port1311089330/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1678212475825847916" to /tmp/TestFunctionalparallelMountCmdany-port1311089330/001/test-1678212475825847916
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (672.240248ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 18:07 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 18:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 18:07 test-1678212475825847916
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh cat /mount-9p/test-1678212475825847916
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-706383 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [234e01ad-9ecc-4459-8f9d-87f05a561335] Pending
helpers_test.go:344: "busybox-mount" [234e01ad-9ecc-4459-8f9d-87f05a561335] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [234e01ad-9ecc-4459-8f9d-87f05a561335] Running
helpers_test.go:344: "busybox-mount" [234e01ad-9ecc-4459-8f9d-87f05a561335] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [234e01ad-9ecc-4459-8f9d-87f05a561335] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.008608137s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-706383 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo umount -f /mount-9p"
E0307 18:08:08.882272  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:08:08.887978  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:08:08.898287  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:08:08.918591  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:08:08.958956  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706383 /tmp/TestFunctionalparallelMountCmdany-port1311089330/001:/mount-9p --alsologtostderr -v=1] ...
E0307 18:08:09.039287  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.33s)
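In the run above, the first `findmnt` probe fails because the 9p mount is not up yet, and the test simply re-runs it until it succeeds. A generic retry sketch of that probe-fail-then-retry behaviour; the probed path and probe command are placeholders for `minikube ssh "findmnt -T /mount-9p | grep 9p"`, and `/` is used so the sketch succeeds on any Linux host:

```shell
#!/bin/sh
# Poll until a path shows up as a mountpoint.  /proc/mounts lists
# "device mountpoint fstype ...", so a mounted path appears surrounded
# by single spaces.  The real test probes /mount-9p inside the guest.
probe() { grep -q " $1 " /proc/mounts; }

mounted=no
for attempt in 1 2 3 4 5; do
  if probe /; then mounted=yes; break; fi
  sleep 1
done
echo "mounted=$mounted"
```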

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.66s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 service list -o json
functional_test.go:1492: Took "623.698617ms" to run "out/minikube-linux-amd64 -p functional-706383 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.49.2:32015
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.76s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.76s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:32015
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-706383 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.2s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-706383 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [45bbb235-7def-46a3-8dcb-a82bdd981d42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [45bbb235-7def-46a3-8dcb-a82bdd981d42] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.007174162s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.20s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (3.12s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-706383 /tmp/TestFunctionalparallelMountCmdspecific-port479041401/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "findmnt -T /mount-9p | grep 9p"
E0307 18:08:09.200164  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (610.066981ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706383 /tmp/TestFunctionalparallelMountCmdspecific-port479041401/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 ssh "sudo umount -f /mount-9p": exit status 1 (463.504395ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-706383 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706383 /tmp/TestFunctionalparallelMountCmdspecific-port479041401/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-706383 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.99.230.191 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-706383 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706383 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-706383
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-706383
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706383 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 904b8cb13b932 | 142MB  |
| registry.k8s.io/kube-proxy                  | v1.26.2           | 6f64e7135a6ec | 65.6MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-706383 | 2a5f53122b4f8 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.26.2           | 240e201d5b0d8 | 123MB  |
| registry.k8s.io/kube-scheduler              | v1.26.2           | db8f409d9a5d7 | 56.3MB |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-706383 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-apiserver              | v1.26.2           | 63d3239c3c159 | 134MB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706383 image ls --format json:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"2a5f53122b4f8b0584d73e71b7f54713e245fcc0d5cc5cddfab69826a09cf29d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-706383"],"size":"30"},{"id":"904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d","repoDigests
":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.2"],"size":"65599999"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-706383"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr
.io/pause:latest"],"size":"240000"},{"id":"db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.2"],"size":"56300000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.2"],"size":"134000000"},{"id":"240e201d5b0d8c6ae66
764165080c22834e3a9fed050cf5780211d973644ac1e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.2"],"size":"123000000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
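`image ls --format json` emits a single JSON array of `{id, repoDigests, repoTags, size}` objects, as the stdout above shows. A jq-free sketch of pulling the tags back out; the sample array is a trimmed copy of the output above, embedded so the snippet runs without a cluster:

```shell
#!/bin/sh
# Trimmed sample of `minikube image ls --format json` output.
json='[{"id":"e6f1816883972","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"fce326961ae2d","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"}]'

# Extract the first tag of each image.  grep -o prints each match on its
# own line; sed strips the `"repoTags":["` prefix and trailing quote.
# Good enough for this fixed shape; real JSON handling should use jq.
tags=$(printf '%s' "$json" | grep -o '"repoTags":\["[^"]*"' | sed 's/.*\["//; s/"$//')
echo "$tags"
```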

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706383 image ls --format yaml:
- id: 904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.2
size: "134000000"
- id: 6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.2
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-706383
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 2a5f53122b4f8b0584d73e71b7f54713e245fcc0d5cc5cddfab69826a09cf29d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-706383
size: "30"
- id: 240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.2
size: "123000000"
- id: db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.2
size: "56300000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
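The image listing above is flat enough to summarize without a YAML library. A minimal sketch (stdlib Python; `parse_images` is a hypothetical helper, not part of the minikube test suite) that tabulates the first tag and numeric size per entry, using two entries copied from the listing:

```python
# Two entries copied from the listing above, in the same flat shape.
SAMPLE = """\
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
  repoDigests: []
  repoTags:
  - gcr.io/k8s-minikube/storage-provisioner:v5
  size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
  repoDigests: []
  repoTags:
  - gcr.io/k8s-minikube/busybox:1.28.4-glibc
  size: "4400000"
"""

def parse_images(text):
    """Collect id, repoTags, and numeric size from the flat listing."""
    images, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("- id: "):
            current = {"id": line[len("- id: "):], "repoTags": []}
            images.append(current)
        elif line.startswith("size: ") and current is not None:
            # size is quoted in the report; convert for sorting/totals
            current["size"] = int(line[len("size: "):].strip('"'))
        elif line.startswith("- ") and current is not None:
            current["repoTags"].append(line[2:])
    return images

for img in parse_images(SAMPLE):
    print(img["repoTags"][0], img["size"])
```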
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706383 ssh pgrep buildkitd: exit status 1 (543.094664ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image build -t localhost/my-image:functional-706383 testdata/build
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-706383 image build -t localhost/my-image:functional-706383 testdata/build: (1.549289807s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706383 image build -t localhost/my-image:functional-706383 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3ee0df2ae7ae
Removing intermediate container 3ee0df2ae7ae
---> b6319c04394b
Step 3/3 : ADD content.txt /
---> e306970c47b6
Successfully built e306970c47b6
Successfully tagged localhost/my-image:functional-706383
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-706383 image build -t localhost/my-image:functional-706383 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.38s)

TestFunctional/parallel/ImageCommands/Setup (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-706383
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image load --daemon gcr.io/google-containers/addon-resizer:functional-706383
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-706383 image load --daemon gcr.io/google-containers/addon-resizer:functional-706383: (3.357348194s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image load --daemon gcr.io/google-containers/addon-resizer:functional-706383
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-706383 image load --daemon gcr.io/google-containers/addon-resizer:functional-706383: (2.369940609s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-706383
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image load --daemon gcr.io/google-containers/addon-resizer:functional-706383
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-706383 image load --daemon gcr.io/google-containers/addon-resizer:functional-706383: (2.810535725s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls
E0307 18:08:29.364623  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.92s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image save gcr.io/google-containers/addon-resizer:functional-706383 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image rm gcr.io/google-containers/addon-resizer:functional-706383
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-706383
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-706383 image save --daemon gcr.io/google-containers/addon-resizer:functional-706383
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-706383 image save --daemon gcr.io/google-containers/addon-resizer:functional-706383: (2.379376805s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-706383
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.51s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-706383
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-706383
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-706383
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestImageBuild/serial/NormalBuild (0.95s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-869184
--- PASS: TestImageBuild/serial/NormalBuild (0.95s)

TestImageBuild/serial/BuildWithBuildArg (1.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-869184
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-869184: (1.096757074s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.10s)
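The `--build-opt=build-arg=ENV_A=test_env_str` flag forwards a build argument to the underlying image build. A hypothetical Dockerfile of the shape such a test could exercise (the actual contents of `testdata/image-build/test-arg` may differ):

```dockerfile
# Hypothetical sketch; the real testdata/image-build/test-arg may differ.
FROM gcr.io/k8s-minikube/busybox
# Declared build argument, supplied via --build-opt=build-arg=ENV_A=...
ARG ENV_A
# Persist the value so the built image can be inspected for it.
ENV ENV_A=${ENV_A}
RUN echo "ENV_A=${ENV_A}" > /env_a.txt
```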

TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-869184
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.38s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-869184
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.38s)

TestIngressAddonLegacy/StartLegacyK8sCluster (52.41s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-437641 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0307 18:09:30.806627  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-437641 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (52.405411474s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (52.41s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-437641 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-437641 addons enable ingress --alsologtostderr -v=5: (11.209203916s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.21s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-437641 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (39.84s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-437641 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-437641 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.975627333s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-437641 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-437641 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6a577a20-3af1-4422-babc-6f0c38e13434] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6a577a20-3af1-4422-babc-6f0c38e13434] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.00699828s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-437641 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-437641 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-437641 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-437641 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-437641 addons disable ingress-dns --alsologtostderr -v=1: (13.061144336s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-437641 addons disable ingress --alsologtostderr -v=1
E0307 18:10:52.727571  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-437641 addons disable ingress --alsologtostderr -v=1: (7.315286234s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.84s)

TestJSONOutput/start/Command (41.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-909751 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-909751 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (41.62643201s)
--- PASS: TestJSONOutput/start/Command (41.63s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-909751 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-909751 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.92s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-909751 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-909751 --output=json --user=testUser: (5.924560357s)
--- PASS: TestJSONOutput/stop/Command (5.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.48s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-290810 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-290810 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.531137ms)
-- stdout --
	{"specversion":"1.0","id":"79d30b4b-d458-4eb7-a3a7-fd630b2e4bb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-290810] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3362e9d4-50f7-492e-add9-d66e5843039a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15985"}}
	{"specversion":"1.0","id":"b965e9db-796d-40d4-96f1-be8571c90a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ef787079-a467-404a-9da6-7bd423d5dcad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig"}}
	{"specversion":"1.0","id":"b2def342-16a1-4a0b-b24d-58fa5e81911b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube"}}
	{"specversion":"1.0","id":"2b9c2723-197d-41cd-ab3a-5d7841f04275","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7cf5913f-243a-4ab3-8019-151b78064212","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aed6e3fb-bed2-479d-af57-a3083855ebb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-290810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-290810
--- PASS: TestErrorJSONOutput (0.48s)
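Each line in the `-- stdout --` block above is a CloudEvents 1.0 envelope, with the payload under `data`. A sketch (stdlib Python; `summarize` is a hypothetical helper, not part of the test suite) that extracts the event kind and message from the error event shown in the log:

```python
import json

# The error event copied verbatim from the log above.
line = '''{"specversion":"1.0","id":"aed6e3fb-bed2-479d-af57-a3083855ebb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}'''

def summarize(event_line):
    """Return (short event kind, message) for a minikube CloudEvents line."""
    event = json.loads(event_line)
    # "io.k8s.sigs.minikube.error" -> "error"
    short_type = event["type"].rsplit(".", 1)[-1]
    message = event.get("data", {}).get("message", "")
    return short_type, message

kind, msg = summarize(line)
print(kind, "->", msg)
```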

TestKicCustomNetwork/create_custom_network (30.42s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-431914 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-431914 --network=: (27.589505265s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-431914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-431914
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-431914: (2.759200356s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.42s)

TestKicCustomNetwork/use_default_bridge_network (30.23s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-620690 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-620690 --network=bridge: (27.637664191s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-620690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-620690
E0307 18:12:50.430066  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:50.435356  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:50.445686  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:50.466000  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:50.506425  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:50.587037  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:50.747327  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:51.067926  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:12:51.709004  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-620690: (2.519486653s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.23s)

TestKicExistingNetwork (30.27s)

=== RUN   TestKicExistingNetwork
E0307 18:12:52.989640  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-834022 --network=existing-network
E0307 18:12:55.550045  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:13:00.671062  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:13:08.882293  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:13:10.911911  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-834022 --network=existing-network: (27.39183224s)
helpers_test.go:175: Cleaning up "existing-network-834022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-834022
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-834022: (2.435831905s)
--- PASS: TestKicExistingNetwork (30.27s)

TestKicCustomSubnet (30.47s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-024300 --subnet=192.168.60.0/24
E0307 18:13:31.392798  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:13:36.567965  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-024300 --subnet=192.168.60.0/24: (28.114841959s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-024300 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-024300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-024300
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-024300: (2.293611457s)
--- PASS: TestKicCustomSubnet (30.47s)

TestKicStaticIP (30.09s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-161222 --static-ip=192.168.200.200
E0307 18:14:12.354141  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-161222 --static-ip=192.168.200.200: (27.149204307s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-161222 ip
helpers_test.go:175: Cleaning up "static-ip-161222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-161222
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-161222: (2.698554512s)
--- PASS: TestKicStaticIP (30.09s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (61.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-421668 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-421668 --driver=docker  --container-runtime=docker: (27.564192693s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-424778 --driver=docker  --container-runtime=docker
E0307 18:15:17.630053  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:15:17.635354  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:15:17.645610  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:15:17.665864  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:15:17.706121  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:15:17.786420  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:15:17.947339  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-424778 --driver=docker  --container-runtime=docker: (26.638823481s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-421668
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
E0307 18:15:18.267793  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-424778
E0307 18:15:18.908884  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-424778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-424778
E0307 18:15:20.190029  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-424778: (2.712901754s)
helpers_test.go:175: Cleaning up "first-421668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-421668
E0307 18:15:22.750527  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-421668: (2.699750665s)
--- PASS: TestMinikubeProfile (61.28s)

TestMountStart/serial/StartWithMountFirst (7.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-794899 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0307 18:15:27.871641  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-794899 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.268219643s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.27s)

TestMountStart/serial/VerifyMountFirst (0.44s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-794899 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)

TestMountStart/serial/StartWithMountSecond (7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-811521 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0307 18:15:34.275318  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:15:38.112152  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-811521 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.997515891s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.00s)

TestMountStart/serial/VerifyMountSecond (0.45s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-811521 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)

TestMountStart/serial/DeleteFirst (2.07s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-794899 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-794899 --alsologtostderr -v=5: (2.069747596s)
--- PASS: TestMountStart/serial/DeleteFirst (2.07s)

TestMountStart/serial/VerifyMountPostDelete (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-811521 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

TestMountStart/serial/Stop (1.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-811521
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-811521: (1.387069464s)
--- PASS: TestMountStart/serial/Stop (1.39s)

TestMountStart/serial/RestartStopped (7.84s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-811521
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-811521: (6.842385172s)
--- PASS: TestMountStart/serial/RestartStopped (7.84s)

TestMountStart/serial/VerifyMountPostStop (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-811521 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

TestMultiNode/serial/FreshStart2Nodes (73.38s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-242095 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0307 18:15:58.592364  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:16:39.553332  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-242095 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m12.563029075s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.38s)

TestMultiNode/serial/AddNode (18.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-242095 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-242095 -v 3 --alsologtostderr: (17.594636166s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr: (1.087716455s)
--- PASS: TestMultiNode/serial/AddNode (18.68s)

TestMultiNode/serial/ProfileList (0.5s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.50s)

TestMultiNode/serial/CopyFile (16.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 status --output json --alsologtostderr: (1.089536025s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp testdata/cp-test.txt multinode-242095:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2006715403/001/cp-test_multinode-242095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095:/home/docker/cp-test.txt multinode-242095-m02:/home/docker/cp-test_multinode-242095_multinode-242095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m02 "sudo cat /home/docker/cp-test_multinode-242095_multinode-242095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095:/home/docker/cp-test.txt multinode-242095-m03:/home/docker/cp-test_multinode-242095_multinode-242095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m03 "sudo cat /home/docker/cp-test_multinode-242095_multinode-242095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp testdata/cp-test.txt multinode-242095-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2006715403/001/cp-test_multinode-242095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095-m02:/home/docker/cp-test.txt multinode-242095:/home/docker/cp-test_multinode-242095-m02_multinode-242095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095 "sudo cat /home/docker/cp-test_multinode-242095-m02_multinode-242095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095-m02:/home/docker/cp-test.txt multinode-242095-m03:/home/docker/cp-test_multinode-242095-m02_multinode-242095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m03 "sudo cat /home/docker/cp-test_multinode-242095-m02_multinode-242095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp testdata/cp-test.txt multinode-242095-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2006715403/001/cp-test_multinode-242095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095-m03:/home/docker/cp-test.txt multinode-242095:/home/docker/cp-test_multinode-242095-m03_multinode-242095.txt
E0307 18:17:50.429440  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095 "sudo cat /home/docker/cp-test_multinode-242095-m03_multinode-242095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 cp multinode-242095-m03:/home/docker/cp-test.txt multinode-242095-m02:/home/docker/cp-test_multinode-242095-m03_multinode-242095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 ssh -n multinode-242095-m02 "sudo cat /home/docker/cp-test_multinode-242095-m03_multinode-242095-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.60s)

TestMultiNode/serial/StopNode (3.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 node stop m03: (1.39768553s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-242095 status: exit status 7 (835.331436ms)

-- stdout --
	multinode-242095
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-242095-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-242095-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr: exit status 7 (836.235328ms)

-- stdout --
	multinode-242095
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-242095-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-242095-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 18:17:55.578828  812028 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:17:55.579092  812028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:17:55.579106  812028 out.go:309] Setting ErrFile to fd 2...
	I0307 18:17:55.579114  812028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:17:55.579421  812028 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	I0307 18:17:55.579660  812028 out.go:303] Setting JSON to false
	I0307 18:17:55.579699  812028 mustload.go:65] Loading cluster: multinode-242095
	I0307 18:17:55.579793  812028 notify.go:220] Checking for updates...
	I0307 18:17:55.580221  812028 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:17:55.580243  812028 status.go:255] checking status of multinode-242095 ...
	I0307 18:17:55.580781  812028 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:17:55.646723  812028 status.go:330] multinode-242095 host status = "Running" (err=<nil>)
	I0307 18:17:55.646749  812028 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:17:55.646998  812028 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095
	I0307 18:17:55.711625  812028 host.go:66] Checking if "multinode-242095" exists ...
	I0307 18:17:55.711980  812028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:17:55.712035  812028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095
	I0307 18:17:55.775867  812028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095/id_rsa Username:docker}
	I0307 18:17:55.855941  812028 ssh_runner.go:195] Run: systemctl --version
	I0307 18:17:55.859745  812028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:17:55.868352  812028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:17:55.988262  812028 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:42 SystemTime:2023-03-07 18:17:55.980033158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0307 18:17:55.988860  812028 kubeconfig.go:92] found "multinode-242095" server: "https://192.168.58.2:8443"
	I0307 18:17:55.988894  812028 api_server.go:165] Checking apiserver status ...
	I0307 18:17:55.988934  812028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:17:55.998100  812028 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup
	I0307 18:17:56.005295  812028 api_server.go:181] apiserver freezer: "11:freezer:/docker/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/kubepods/burstable/pode6407eb55a1944937cba3e31bce696d3/3a83d434102f045f558368b9914aba3062f5349f839a7049abd75f8e2d0f7b11"
	I0307 18:17:56.005349  812028 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d1953c0fdb5726ad5ee16d1f0882a8fb8e7e2e186e6ad82452bb569ebb281614/kubepods/burstable/pode6407eb55a1944937cba3e31bce696d3/3a83d434102f045f558368b9914aba3062f5349f839a7049abd75f8e2d0f7b11/freezer.state
	I0307 18:17:56.011547  812028 api_server.go:203] freezer state: "THAWED"
	I0307 18:17:56.011574  812028 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0307 18:17:56.016328  812028 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0307 18:17:56.016350  812028 status.go:421] multinode-242095 apiserver status = Running (err=<nil>)
	I0307 18:17:56.016359  812028 status.go:257] multinode-242095 status: &{Name:multinode-242095 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:17:56.016375  812028 status.go:255] checking status of multinode-242095-m02 ...
	I0307 18:17:56.016584  812028 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:17:56.079485  812028 status.go:330] multinode-242095-m02 host status = "Running" (err=<nil>)
	I0307 18:17:56.079517  812028 host.go:66] Checking if "multinode-242095-m02" exists ...
	I0307 18:17:56.079786  812028 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-242095-m02
	I0307 18:17:56.141674  812028 host.go:66] Checking if "multinode-242095-m02" exists ...
	I0307 18:17:56.141921  812028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:17:56.141959  812028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242095-m02
	I0307 18:17:56.204564  812028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15985-636026/.minikube/machines/multinode-242095-m02/id_rsa Username:docker}
	I0307 18:17:56.287978  812028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:17:56.297039  812028 status.go:257] multinode-242095-m02 status: &{Name:multinode-242095-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:17:56.297072  812028 status.go:255] checking status of multinode-242095-m03 ...
	I0307 18:17:56.297304  812028 cli_runner.go:164] Run: docker container inspect multinode-242095-m03 --format={{.State.Status}}
	I0307 18:17:56.364357  812028 status.go:330] multinode-242095-m03 host status = "Stopped" (err=<nil>)
	I0307 18:17:56.364379  812028 status.go:343] host is not running, skipping remaining checks
	I0307 18:17:56.364395  812028 status.go:257] multinode-242095-m03 status: &{Name:multinode-242095-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.07s)

TestMultiNode/serial/StartAfterStop (12.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 node start m03 --alsologtostderr
E0307 18:18:01.474485  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 node start m03 --alsologtostderr: (11.453040257s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status
E0307 18:18:08.882000  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 status: (1.083259961s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.67s)

TestMultiNode/serial/RestartKeepsNodes (94.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-242095
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-242095
E0307 18:18:18.115738  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-242095: (22.9552118s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-242095 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-242095 --wait=true -v=8 --alsologtostderr: (1m11.198908623s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-242095
--- PASS: TestMultiNode/serial/RestartKeepsNodes (94.25s)

TestMultiNode/serial/DeleteNode (6.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 node delete m03: (5.151628981s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.11s)

TestMultiNode/serial/StopMultiNode (22.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-242095 stop: (21.68439386s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-242095 status: exit status 7 (182.5ms)

-- stdout --
	multinode-242095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-242095-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr: exit status 7 (181.536806ms)

-- stdout --
	multinode-242095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-242095-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 18:20:11.313255  833835 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:20:11.313819  833835 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:20:11.313835  833835 out.go:309] Setting ErrFile to fd 2...
	I0307 18:20:11.313844  833835 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:20:11.314087  833835 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-636026/.minikube/bin
	I0307 18:20:11.314421  833835 out.go:303] Setting JSON to false
	I0307 18:20:11.314540  833835 mustload.go:65] Loading cluster: multinode-242095
	I0307 18:20:11.314632  833835 notify.go:220] Checking for updates...
	I0307 18:20:11.315372  833835 config.go:182] Loaded profile config "multinode-242095": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 18:20:11.315391  833835 status.go:255] checking status of multinode-242095 ...
	I0307 18:20:11.315777  833835 cli_runner.go:164] Run: docker container inspect multinode-242095 --format={{.State.Status}}
	I0307 18:20:11.381940  833835 status.go:330] multinode-242095 host status = "Stopped" (err=<nil>)
	I0307 18:20:11.381973  833835 status.go:343] host is not running, skipping remaining checks
	I0307 18:20:11.381981  833835 status.go:257] multinode-242095 status: &{Name:multinode-242095 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:20:11.382012  833835 status.go:255] checking status of multinode-242095-m02 ...
	I0307 18:20:11.382262  833835 cli_runner.go:164] Run: docker container inspect multinode-242095-m02 --format={{.State.Status}}
	I0307 18:20:11.445965  833835 status.go:330] multinode-242095-m02 host status = "Stopped" (err=<nil>)
	I0307 18:20:11.445997  833835 status.go:343] host is not running, skipping remaining checks
	I0307 18:20:11.446004  833835 status.go:257] multinode-242095-m02 status: &{Name:multinode-242095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.05s)

TestMultiNode/serial/RestartMultiNode (58.04s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-242095 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0307 18:20:17.630565  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:20:45.314826  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-242095 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.080070416s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-242095 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.04s)

TestMultiNode/serial/ValidateNameConflict (31.07s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-242095
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-242095-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-242095-m02 --driver=docker  --container-runtime=docker: exit status 14 (70.997352ms)

-- stdout --
	* [multinode-242095-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-242095-m02' is duplicated with machine name 'multinode-242095-m02' in profile 'multinode-242095'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-242095-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-242095-m03 --driver=docker  --container-runtime=docker: (27.800867849s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-242095
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-242095: exit status 80 (422.860715ms)

-- stdout --
	* Adding node m03 to cluster multinode-242095
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-242095-m03 already exists in multinode-242095-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-242095-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-242095-m03: (2.724454456s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.07s)

TestPreload (135.51s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105598 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105598 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (53.6483858s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-105598 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-105598
E0307 18:22:50.429763  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-105598: (10.86842522s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105598 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0307 18:23:08.882199  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105598 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (1m6.837657576s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-105598 -- docker images
helpers_test.go:175: Cleaning up "test-preload-105598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-105598
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-105598: (2.743617564s)
--- PASS: TestPreload (135.51s)

TestScheduledStopUnix (101.31s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-302087 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-302087 --memory=2048 --driver=docker  --container-runtime=docker: (27.035352875s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-302087 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-302087 -n scheduled-stop-302087
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-302087 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-302087 --cancel-scheduled
E0307 18:24:31.928725  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-302087 -n scheduled-stop-302087
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-302087
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-302087 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0307 18:25:17.631629  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-302087
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-302087: exit status 7 (119.264823ms)

-- stdout --
	scheduled-stop-302087
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-302087 -n scheduled-stop-302087
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-302087 -n scheduled-stop-302087: exit status 7 (116.252515ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-302087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-302087
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-302087: (2.257910637s)
--- PASS: TestScheduledStopUnix (101.31s)

TestSkaffold (61.35s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1299790752 version
skaffold_test.go:63: skaffold version: v2.2.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-263141 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-263141 --memory=2600 --driver=docker  --container-runtime=docker: (27.683701604s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1299790752 run --minikube-profile skaffold-263141 --kube-context skaffold-263141 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1299790752 run --minikube-profile skaffold-263141 --kube-context skaffold-263141 --status-check=true --port-forward=false --interactive=false: (20.022195352s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7cdb7f47b8-bt8ht" [7649df8e-c73e-4303-ba93-ec3075bdc891] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011159743s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7fc8f8f8f4-6t2s8" [4a2f4536-dbd5-41cc-a20b-b23f71108029] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006407267s
helpers_test.go:175: Cleaning up "skaffold-263141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-263141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-263141: (2.977543968s)
--- PASS: TestSkaffold (61.35s)

TestInsufficientStorage (12.96s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-584942 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-584942 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.747586661s)

-- stdout --
	{"specversion":"1.0","id":"9b38cbbf-ab96-4ea1-ace4-4e63e76621e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-584942] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bd58bbb-e568-44fa-9707-7ea0de5ab0f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15985"}}
	{"specversion":"1.0","id":"deb42959-888c-42b3-8b38-689136fa55c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"708d2137-6020-4cbe-8b5f-e850e21bce1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig"}}
	{"specversion":"1.0","id":"67548f44-69d5-4cd4-9353-10288b3c53b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube"}}
	{"specversion":"1.0","id":"4dca0560-c0d5-4104-8ad1-eaf2b328843b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8b7ed0d5-d0ae-4fb4-857f-ab8a7c65f36a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80698d51-4e05-47f4-b7d0-5698237174a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ac270cf6-4286-459c-a442-30f2efcd10b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e4189b97-8f35-4a02-a42d-238080fd1599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5998fb03-34dc-45e1-b940-7c358b2e30f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5033ce19-9eaa-4d6c-9690-733b8fb4baf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-584942 in cluster insufficient-storage-584942","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e60548d8-1f9b-48e3-9fc3-f0726667602e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4cc5a13-b13c-4021-8def-1254d2387737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e006bff9-eb0f-43eb-a4b2-a4be10de4fa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-584942 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-584942 --output=json --layout=cluster: exit status 7 (461.241279ms)

-- stdout --
	{"Name":"insufficient-storage-584942","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-584942","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0307 18:26:56.121211  882309 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-584942" does not appear in /home/jenkins/minikube-integration/15985-636026/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-584942 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-584942 --output=json --layout=cluster: exit status 7 (454.871515ms)

-- stdout --
	{"Name":"insufficient-storage-584942","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-584942","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0307 18:26:56.576595  882506 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-584942" does not appear in /home/jenkins/minikube-integration/15985-636026/kubeconfig
	E0307 18:26:56.584718  882506 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/insufficient-storage-584942/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-584942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-584942
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-584942: (2.296710964s)
--- PASS: TestInsufficientStorage (12.96s)
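For anyone scripting against these reports: the `--output=json --layout=cluster` payload above is machine-readable. A minimal Python sketch (an illustration only, not part of the test suite; the payload is copied verbatim from the second `status` invocation above):

```python
import json

# Cluster-status JSON copied verbatim from the second
# `minikube status --output=json --layout=cluster` run above.
payload = (
    '{"Name":"insufficient-storage-584942","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"StatusDetail":"/var is almost out of disk space",'
    '"BinaryVersion":"v1.29.0",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,'
    '"StatusName":"Error"}},'
    '"Nodes":[{"Name":"insufficient-storage-584942","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":405,'
    '"StatusName":"Stopped"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

status = json.loads(payload)

# minikube reuses HTTP-style codes in this layout:
# 507 InsufficientStorage, 500 Error, 405 Stopped.
node = status["Nodes"][0]
stopped = sorted(c["Name"] for c in node["Components"].values()
                 if c["StatusName"] == "Stopped")
print(status["StatusCode"], stopped)  # 507 ['apiserver', 'kubelet']
```

This is why the test accepts exit status 7 here: the binary signals a degraded cluster through the exit code while still emitting parseable JSON.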

TestRunningBinaryUpgrade (91.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.2096571520.exe start -p running-upgrade-112063 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.2096571520.exe start -p running-upgrade-112063 --memory=2200 --vm-driver=docker  --container-runtime=docker: (56.574756102s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-112063 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-112063 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.302904108s)
helpers_test.go:175: Cleaning up "running-upgrade-112063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-112063
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-112063: (4.95085996s)
--- PASS: TestRunningBinaryUpgrade (91.14s)

TestKubernetesUpgrade (142.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.28753991s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-294542
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-294542: (6.131600267s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-294542 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-294542 status --format={{.Host}}: exit status 7 (153.576484ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.297415085s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-294542 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (90.051889ms)

-- stdout --
	* [kubernetes-upgrade-294542] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-294542
	    minikube start -p kubernetes-upgrade-294542 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2945422 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.2, by running:
	    
	    minikube start -p kubernetes-upgrade-294542 --kubernetes-version=v1.26.2
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-294542 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.749044698s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-294542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-294542
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-294542: (3.022741403s)
--- PASS: TestKubernetesUpgrade (142.80s)

TestMissingContainerUpgrade (115.69s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.1060370613.exe start -p missing-upgrade-298625 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.1060370613.exe start -p missing-upgrade-298625 --memory=2200 --driver=docker  --container-runtime=docker: (1m3.969742373s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-298625
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-298625: (3.168330315s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-298625
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-298625 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-298625 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.059448661s)
helpers_test.go:175: Cleaning up "missing-upgrade-298625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-298625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-298625: (3.102435053s)
--- PASS: TestMissingContainerUpgrade (115.69s)

TestStoppedBinaryUpgrade/Setup (0.53s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187744 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-187744 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (83.27798ms)

-- stdout --
	* [NoKubernetes-187744] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-636026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-636026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (48.4s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187744 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187744 --driver=docker  --container-runtime=docker: (47.716730332s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-187744 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.40s)

TestStoppedBinaryUpgrade/Upgrade (81.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.687726684.exe start -p stopped-upgrade-205371 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.687726684.exe start -p stopped-upgrade-205371 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.366918603s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.687726684.exe -p stopped-upgrade-205371 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.687726684.exe -p stopped-upgrade-205371 stop: (2.71183533s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-205371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-205371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.379071197s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (81.46s)

TestNoKubernetes/serial/StartWithStopK8s (19.71s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187744 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187744 --no-kubernetes --driver=docker  --container-runtime=docker: (16.338618002s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-187744 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-187744 status -o json: exit status 2 (632.949601ms)

-- stdout --
	{"Name":"NoKubernetes-187744","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-187744
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-187744: (2.738149274s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.71s)
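The one-line `status -o json` payload above is what the test asserts against. A small Python sketch (an illustration only, not part of the test suite; the payload is copied verbatim from the output above) showing why the command exits with status 2:

```python
import json

# Profile status copied verbatim from
# `minikube -p NoKubernetes-187744 status -o json` above.
payload = (
    '{"Name":"NoKubernetes-187744","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)

status = json.loads(payload)

# With --no-kubernetes the host container keeps running while kubelet and
# the API server stay stopped, so `status` reports a non-zero exit code
# even though the profile itself is healthy.
assert status["Host"] == "Running"
assert status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print(status["Name"], "worker:", status["Worker"])  # NoKubernetes-187744 worker: False
```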

TestNoKubernetes/serial/Start (8.91s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187744 --no-kubernetes --driver=docker  --container-runtime=docker
E0307 18:28:08.884064  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187744 --no-kubernetes --driver=docker  --container-runtime=docker: (8.905857939s)
--- PASS: TestNoKubernetes/serial/Start (8.91s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.74s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-187744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-187744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (738.18443ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.74s)

TestNoKubernetes/serial/ProfileList (3.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.096603595s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.667317244s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.76s)

TestNoKubernetes/serial/Stop (1.7s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-187744
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-187744: (1.702487795s)
--- PASS: TestNoKubernetes/serial/Stop (1.70s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.9s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-205371
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-205371: (1.90345895s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.90s)

TestNoKubernetes/serial/StartNoArgs (11.38s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187744 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187744 --driver=docker  --container-runtime=docker: (11.37812286s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (11.38s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.7s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-187744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-187744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (699.056973ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.70s)

TestPause/serial/Start (41.95s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-594241 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0307 18:30:17.629971  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-594241 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (41.951601837s)
--- PASS: TestPause/serial/Start (41.95s)

TestPause/serial/SecondStartNoReconfiguration (44.2s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-594241 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-594241 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.166548193s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.20s)

TestNetworkPlugins/group/auto/Start (57.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (57.769755734s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.77s)

TestNetworkPlugins/group/kindnet/Start (55.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (55.130130191s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.13s)

TestPause/serial/Pause (0.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-594241 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

TestPause/serial/VerifyStatus (0.55s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-594241 --output=json --layout=cluster
E0307 18:31:32.917681  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:31:32.922989  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:31:32.933253  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:31:32.953493  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:31:32.993851  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-594241 --output=json --layout=cluster: exit status 2 (550.110666ms)

-- stdout --
	{"Name":"pause-594241","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-594241","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.55s)

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-594241 --alsologtostderr -v=5
E0307 18:31:33.074622  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:31:33.235754  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:31:33.556690  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-594241 --alsologtostderr -v=5
E0307 18:31:34.197498  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
--- PASS: TestPause/serial/PauseAgain (0.81s)

TestPause/serial/DeletePaused (2.84s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-594241 --alsologtostderr -v=5
E0307 18:31:35.478158  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-594241 --alsologtostderr -v=5: (2.836269535s)
--- PASS: TestPause/serial/DeletePaused (2.84s)

TestPause/serial/VerifyDeletedResources (1.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0307 18:31:38.038729  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-594241
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-594241: exit status 1 (64.959485ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-594241: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.20s)

TestNetworkPlugins/group/calico/Start (72.52s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0307 18:31:40.674991  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:31:43.159858  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m12.523870647s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.52s)

TestNetworkPlugins/group/auto/KubeletFlags (0.64s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.64s)

TestNetworkPlugins/group/auto/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-465ns" [5e63731d-c00e-482d-a66d-dcb2c7e81715] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-465ns" [5e63731d-c00e-482d-a66d-dcb2c7e81715] Running
E0307 18:31:53.400790  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006547833s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-g544l" [8cf0dd65-e934-41a6-9979-8de4be8ec08e] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015114988s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.59s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.59s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-ktbtf" [b6264f5d-4346-4ef3-ad80-46cdc93402d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-ktbtf" [b6264f5d-4346-4ef3-ad80-46cdc93402d0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.008058757s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (64.82s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m4.820765291s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.82s)

TestNetworkPlugins/group/false/Start (48.49s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (48.493165854s)
--- PASS: TestNetworkPlugins/group/false/Start (48.49s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2bhlh" [e5eda714-700d-4566-991a-6d45c3787af2] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017054787s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/enable-default-cni/Start (45.45s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0307 18:32:54.842446  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (45.445108057s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (45.45s)

TestNetworkPlugins/group/calico/KubeletFlags (0.65s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.65s)

TestNetworkPlugins/group/calico/NetCatPod (10.93s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-49z4w" [47146e15-5481-4042-8f3e-36ec3bdff83d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-49z4w" [47146e15-5481-4042-8f3e-36ec3bdff83d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.036088323s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.93s)

TestNetworkPlugins/group/calico/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.86s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.86s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-hzsnw" [8b713223-f8ad-45af-bba6-b9da39aa4c2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-hzsnw" [8b713223-f8ad-45af-bba6-b9da39aa4c2f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.007007075s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

TestNetworkPlugins/group/false/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.57s)

TestNetworkPlugins/group/false/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-jvptz" [eaef22c6-6e21-4fb2-af6e-4818507165ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-jvptz" [eaef22c6-6e21-4fb2-af6e-4818507165ba] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006721769s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.23s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.74s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.74s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-n26p8" [2a3aef25-3552-47ec-b1ab-8ed6b3d92704] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-n26p8" [2a3aef25-3552-47ec-b1ab-8ed6b3d92704] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.011589823s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

TestNetworkPlugins/group/flannel/Start (60.71s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m0.70506532s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.71s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.33s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.32s)

TestNetworkPlugins/group/false/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (89.65s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m29.653513941s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.65s)

TestNetworkPlugins/group/kubenet/Start (50.52s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-942485 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (50.518629792s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (50.52s)

TestStartStop/group/old-k8s-version/serial/FirstStart (124.94s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-731373 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-731373 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m4.943444296s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.94s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jw4tj" [e4239d42-f516-4fba-a057-c558d4d5ef2a] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015373425s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.88s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.88s)

TestNetworkPlugins/group/flannel/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-s58bw" [709c8fd4-0a37-4f72-bfef-9ca5375a49b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-s58bw" [709c8fd4-0a37-4f72-bfef-9ca5375a49b8] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.014142585s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.38s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.52s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-47xng" [2dda0a72-eaa6-4bbc-9ae6-60881dea0de4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-47xng" [2dda0a72-eaa6-4bbc-9ae6-60881dea0de4] Running
E0307 18:35:17.630014  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.006182871s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.23s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestStartStop/group/no-preload/serial/FirstStart (53.27s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-687149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-687149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (53.268151313s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-942485 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.52s)

TestNetworkPlugins/group/bridge/NetCatPod (13.23s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-942485 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-fdl6s" [24669886-d6ea-4dc7-9044-b7bc3ad7559f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-fdl6s" [24669886-d6ea-4dc7-9044-b7bc3ad7559f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.006539147s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.23s)

TestStartStop/group/embed-certs/serial/FirstStart (48.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-223656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-223656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (48.198795825s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.20s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-942485 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-942485 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
E0307 18:42:50.429564  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:42:51.501971  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:42:55.799424  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:43:08.882151  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:43:19.186623  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-687149 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d82fd1f6-1704-4eb2-9512-c5083d991b3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d82fd1f6-1704-4eb2-9512-c5083d991b3b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012144408s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-687149 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-731373 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0bd606d3-19e0-4d7a-a858-d9af66824a16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0bd606d3-19e0-4d7a-a858-d9af66824a16] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.014187763s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-731373 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-687149 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-687149 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/no-preload/serial/Stop (11.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-687149 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-687149 --alsologtostderr -v=3: (11.0464236s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.05s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-530651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
E0307 18:36:32.917886  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-530651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (1m21.819362568s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.82s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-731373 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-731373 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/old-k8s-version/serial/Stop (10.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-731373 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-731373 --alsologtostderr -v=3: (10.934059293s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.93s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-687149 -n no-preload-687149
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-687149 -n no-preload-687149: exit status 7 (122.56052ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-687149 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/no-preload/serial/SecondStart (558.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-687149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-687149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (9m17.977447869s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-687149 -n no-preload-687149
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (558.49s)

TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-223656 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [60a94f3e-09f2-4577-924d-88ab55af8ace] Pending
helpers_test.go:344: "busybox" [60a94f3e-09f2-4577-924d-88ab55af8ace] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [60a94f3e-09f2-4577-924d-88ab55af8ace] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.013744043s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-223656 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-731373 -n old-k8s-version-731373
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-731373 -n old-k8s-version-731373: exit status 7 (127.45282ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-731373 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (337.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-731373 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0307 18:36:47.166217  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:47.171558  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:47.181886  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:47.202163  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:47.243090  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:47.324028  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:47.484856  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:47.805528  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:48.445685  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:49.726390  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-731373 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m37.362086399s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-731373 -n old-k8s-version-731373
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (337.98s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-223656 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-223656 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/embed-certs/serial/Stop (10.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-223656 --alsologtostderr -v=3
E0307 18:36:52.287138  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:36:57.407803  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:37:00.603699  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:37:00.890645  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:00.895924  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:00.906315  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:00.926909  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:00.967185  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:01.047541  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:01.207983  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:01.529144  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:02.169925  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-223656 --alsologtostderr -v=3: (10.936997013s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223656 -n embed-certs-223656
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223656 -n embed-certs-223656: exit status 7 (157.36487ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-223656 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (315.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-223656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
E0307 18:37:03.450432  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:06.011402  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:07.648686  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:37:11.132262  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:21.372565  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:28.129228  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:37:41.853702  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:37:50.429757  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
E0307 18:37:51.502408  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:51.507690  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:51.517978  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:51.538277  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:51.578639  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:51.658993  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:51.819669  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:52.140317  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:37:52.780800  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-223656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (5m14.986185379s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223656 -n embed-certs-223656
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (315.61s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-530651 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [06ba440a-ee27-4784-ae93-956371710494] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0307 18:37:54.061051  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
helpers_test.go:344: "busybox" [06ba440a-ee27-4784-ae93-956371710494] Running
E0307 18:37:56.621365  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.014150988s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-530651 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-530651 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-530651 describe deploy/metrics-server -n kube-system
E0307 18:38:01.742522  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-530651 --alsologtostderr -v=3
E0307 18:38:08.882330  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:38:09.089552  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:38:11.983615  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-530651 --alsologtostderr -v=3: (11.137885226s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651: exit status 7 (127.954243ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-530651 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-530651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
E0307 18:38:22.814375  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:38:32.464426  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:38:32.973822  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:32.979109  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:32.989360  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:33.009660  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:33.049995  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:33.130365  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:33.290835  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:33.611934  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:34.252644  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:35.533421  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:36.204296  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:36.209591  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:36.220709  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:36.241033  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:36.281329  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:36.361797  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:36.522238  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:36.842863  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:37.483430  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:38.094409  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:38.182538  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:38.187859  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:38.198181  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:38.218569  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:38.258878  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:38.339252  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:38.499717  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:38.764428  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:38.820628  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:39.461587  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:40.742595  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:41.325234  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:43.215208  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:43.303435  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:46.446096  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:48.424723  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:38:53.456319  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:38:56.686907  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:38:58.665717  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:39:13.425047  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:39:13.936716  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:39:17.168060  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:39:19.146447  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:39:31.010732  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:39:41.702765  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:41.708066  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:41.718314  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:41.738595  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:41.779048  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:41.859551  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:42.020044  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:42.340671  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:42.981034  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:44.261542  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:44.735208  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:39:46.822685  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:51.943489  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:39:54.897563  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:39:58.128307  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:40:00.107527  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:40:02.184226  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:40:11.955886  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:11.961172  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:11.971462  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:11.991758  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:12.032027  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:12.112406  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:12.272877  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:12.593437  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:13.234356  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:14.515107  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:17.076286  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:17.630503  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/ingress-addon-legacy-437641/client.crt: no such file or directory
E0307 18:40:22.196989  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:22.664599  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:40:32.437163  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:35.345955  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/calico-942485/client.crt: no such file or directory
E0307 18:40:49.405442  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:49.410711  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:49.421019  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:49.441301  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:49.481629  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:49.561940  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:49.722374  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:50.043068  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:50.683702  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:51.964625  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:52.917695  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:40:54.524755  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:40:59.645358  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:41:03.624821  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
E0307 18:41:09.885855  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:41:11.929649  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/addons-581908/client.crt: no such file or directory
E0307 18:41:16.817799  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:41:20.048673  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:41:22.028152  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
E0307 18:41:30.366559  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:41:32.917751  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/skaffold-263141/client.crt: no such file or directory
E0307 18:41:33.878617  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kubenet-942485/client.crt: no such file or directory
E0307 18:41:47.165796  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
E0307 18:42:00.890727  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
E0307 18:42:11.326775  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:42:14.851511  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/auto-942485/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-530651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (9m19.425493547s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.92s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-s56wv" [1f1cd8fc-519e-4397-89c1-1ea259437198] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-s56wv" [1f1cd8fc-519e-4397-89c1-1ea259437198] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.014925864s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ttqgd" [ec8650f7-a86c-4ff4-a315-69bd0864129b] Running
E0307 18:42:25.544989  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/flannel-942485/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013262499s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-s56wv" [1f1cd8fc-519e-4397-89c1-1ea259437198] Running
E0307 18:42:28.575969  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/kindnet-942485/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005497399s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-223656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ttqgd" [ec8650f7-a86c-4ff4-a315-69bd0864129b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006870027s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-731373 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-223656 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

TestStartStop/group/embed-certs/serial/Pause (3.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-223656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223656 -n embed-certs-223656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223656 -n embed-certs-223656: exit status 2 (542.39329ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-223656 -n embed-certs-223656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-223656 -n embed-certs-223656: exit status 2 (550.169524ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-223656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223656 -n embed-certs-223656
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-223656 -n embed-certs-223656
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.88s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-731373 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.58s)

TestStartStop/group/old-k8s-version/serial/Pause (4.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-731373 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-731373 -n old-k8s-version-731373
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-731373 -n old-k8s-version-731373: exit status 2 (585.252895ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-731373 -n old-k8s-version-731373
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-731373 -n old-k8s-version-731373: exit status 2 (614.252753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-731373 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-731373 -n old-k8s-version-731373
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-731373 -n old-k8s-version-731373
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.18s)

TestStartStop/group/newest-cni/serial/FirstStart (42.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-853599 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-853599 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (42.168799831s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.17s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-853599 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/newest-cni/serial/Stop (5.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-853599 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-853599 --alsologtostderr -v=3: (5.900914643s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.90s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-853599 -n newest-cni-853599
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-853599 -n newest-cni-853599: exit status 7 (123.165061ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-853599 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (27.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-853599 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
E0307 18:43:32.973729  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
E0307 18:43:33.247077  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/bridge-942485/client.crt: no such file or directory
E0307 18:43:36.204514  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/false-942485/client.crt: no such file or directory
E0307 18:43:38.182900  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/enable-default-cni-942485/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-853599 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (27.402665882s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-853599 -n newest-cni-853599
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.91s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-853599 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

TestStartStop/group/newest-cni/serial/Pause (3.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-853599 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-853599 -n newest-cni-853599
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-853599 -n newest-cni-853599: exit status 2 (499.719902ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-853599 -n newest-cni-853599
E0307 18:44:00.658994  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/custom-flannel-942485/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-853599 -n newest-cni-853599: exit status 2 (497.893986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-853599 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-853599 -n newest-cni-853599
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-853599 -n newest-cni-853599
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.59s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-2zxf7" [7a375665-f1c4-4901-b0b2-c179ac2d7867] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013683633s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-2zxf7" [7a375665-f1c4-4901-b0b2-c179ac2d7867] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007573799s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-687149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.51s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-687149 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.51s)

TestStartStop/group/no-preload/serial/Pause (3.53s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-687149 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-687149 -n no-preload-687149
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-687149 -n no-preload-687149: exit status 2 (495.940642ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-687149 -n no-preload-687149
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-687149 -n no-preload-687149: exit status 2 (500.648949ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-687149 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-687149 -n no-preload-687149
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-687149 -n no-preload-687149
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.53s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rh2s7" [bfe357a7-42cf-44bc-9790-2d18950db2de] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011839633s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rh2s7" [bfe357a7-42cf-44bc-9790-2d18950db2de] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005654134s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-530651 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-530651 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-530651 --alsologtostderr -v=1
E0307 18:47:43.692988  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/no-preload-687149/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651: exit status 2 (480.497355ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651: exit status 2 (477.602583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-530651 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-530651 -n default-k8s-diff-port-530651
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

Test skip (19/313)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.2/cached-images (0.00s)

TestDownloadOnly/v1.26.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.2/binaries (0.00s)

TestDownloadOnly/v1.26.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.2/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.68s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
E0307 18:27:50.429954  642743 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/functional-706383/client.crt: no such file or directory
panic.go:522: 
----------------------- debugLogs start: cilium-942485 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-942485

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-942485

>>> host: /etc/nsswitch.conf:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /etc/hosts:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-942485

>>> host: crictl pods:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: crictl containers:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> k8s: describe netcat deployment:
error: context "cilium-942485" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-942485" does not exist

>>> k8s: netcat logs:
error: context "cilium-942485" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-942485" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-942485" does not exist

>>> k8s: coredns logs:
error: context "cilium-942485" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-942485" does not exist

>>> k8s: api server logs:
error: context "cilium-942485" does not exist

>>> host: /etc/cni:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: ip a s:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: ip r s:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: iptables-save:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: iptables table nat:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-942485

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-942485

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-942485" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-942485" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-942485

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-942485

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-942485" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-942485" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-942485" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-942485" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-942485" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: kubelet daemon config:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> k8s: kubelet logs:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-636026/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 18:27:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-187744
contexts:
- context:
    cluster: NoKubernetes-187744
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 18:27:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: NoKubernetes-187744
  name: NoKubernetes-187744
current-context: NoKubernetes-187744
kind: Config
preferences: {}
users:
- name: NoKubernetes-187744
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/NoKubernetes-187744/client.crt
    client-key: /home/jenkins/minikube-integration/15985-636026/.minikube/profiles/NoKubernetes-187744/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-942485

>>> host: docker daemon status:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: docker daemon config:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: docker system info:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: cri-docker daemon status:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: cri-docker daemon config:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: cri-dockerd version:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: containerd daemon status:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: containerd daemon config:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: containerd config dump:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: crio daemon status:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: crio daemon config:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: /etc/crio:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

>>> host: crio config:
* Profile "cilium-942485" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942485"

----------------------- debugLogs end: cilium-942485 [took: 4.190332574s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-942485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-942485
--- SKIP: TestNetworkPlugins/group/cilium (4.68s)
TestStartStop/group/disable-driver-mounts (0.49s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-466368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-466368
--- SKIP: TestStartStop/group/disable-driver-mounts (0.49s)